Tonight I continued my quest to get EACH and EVERY point displayed on the map.
I didn't see Shawn today, as he stayed home to tinker more with PHPme [PHP mapEngine]. So he kept the phone - at least until Wednesday.
I wanted to test the new software I wrote on the weekend, so I used my phone to give it a test run.
Good news and bad.
The good - every little point is accounted for.
Also, a feature addition requested by Shawn: have the unit remember the device with which it was last paired, thus preventing the user from needing to find his/her device every time.
So once the device is started for the first time - it will start looking for new devices.
Once you select one - it will add it to the RMS [Record Management System] on J2ME (worst data storage method/API ever invented).
Next time a unit starts - we check the datastore, and just connect to the same device.
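The "remember the last device" flow could be sketched roughly as below. On J2ME this would go through the RMS; here a `Properties` file stands in for the record store so the logic runs on desktop Java - the class and key names are mine, not the actual code.

```java
import java.io.*;
import java.util.Properties;

// Sketch of the "remember last paired device" flow. On J2ME this would use
// the RMS; here a Properties file stands in for the record store.
public class LastDeviceStore {
    private final File store;

    public LastDeviceStore(File store) { this.store = store; }

    // Returns the address of the last paired device, or null if we've
    // never paired (first start -> trigger Bluetooth device discovery).
    public String load() throws IOException {
        if (!store.exists()) return null;
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(store)) { p.load(in); }
        return p.getProperty("lastDevice");
    }

    // Persist the chosen device so the next start can reconnect directly.
    public void save(String btAddress) throws IOException {
        Properties p = new Properties();
        p.setProperty("lastDevice", btAddress);
        try (FileOutputStream out = new FileOutputStream(store)) {
            p.store(out, "last paired GPS unit");
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("pairing", ".properties");
        LastDeviceStore s = new LastDeviceStore(f);
        System.out.println(s.load());    // null -> would scan for new devices
        s.save("00:0A:3A:26:4B:F1");     // hypothetical Bluetooth address
        System.out.println(s.load());    // reconnect to this one next time
        f.delete();
    }
}
```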
Voila!
My NMEA parser works like magic - and doesn't skip a beat. Which is great news, given the bugginess of my previous attempt. I used to have to clean the strings generated by the GPS, as I couldn't synchronize properly - now it works like a charm.
I did more research on it - and one of the papers I referred to is:
http://www.visualgps.net/Papers/NMEAParser/NMEA%20Parser%20Design.htm
which has a very helpful Flow/State diagram.
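The core of that design boils down to: accumulate between `$` and `*`, then verify the XOR checksum before trusting any field. A minimal sketch of that idea (not the actual Jargo parser) looks like this:

```java
// Minimal NMEA sentence validator/splitter, following the state-machine idea
// from the paper above: accumulate between '$' and '*', then verify the XOR
// checksum before trusting any field. A sketch, not the actual Jargo code.
public class NmeaSentence {
    // Returns the comma-separated fields if the sentence is well formed and
    // its checksum matches, otherwise null (caller should drop the sentence).
    public static String[] parse(String line) {
        if (line == null || !line.startsWith("$")) return null;
        int star = line.indexOf('*');
        if (star < 0 || star + 3 > line.length()) return null;
        String body = line.substring(1, star);
        int sum = 0;
        for (int i = 0; i < body.length(); i++) sum ^= body.charAt(i); // XOR of chars between $ and *
        int expected = Integer.parseInt(line.substring(star + 1, star + 3), 16);
        return (sum == expected) ? body.split(",", -1) : null;
    }

    public static void main(String[] args) {
        // A valid GPGLL sentence...
        String[] ok = parse("$GPGLL,5133.81,N,00042.25,W*75");
        System.out.println(ok == null ? "rejected" : ok[0]);
        // ...and the same sentence with one corrupted digit.
        String[] bad = parse("$GPGLL,5133.82,N,00042.25,W*75");
        System.out.println(bad == null ? "rejected" : bad[0]);
    }
}
```

Dropping a corrupted sentence instead of trying to clean it is exactly what made the old string-cleaning approach unnecessary.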
Now for the bad.
Even though the parser didn't skip a beat, one of the things I did to simulate a tunnel [or any loss of GPS + GPRS signal] was to put the GPS unit inside my filing cabinet every 10-15 seconds.
This led to two things.
1. The Bluetooth signal was lost by the phone [not the connection, as the timeout is too great - just the serial dump ceased to exist]. As soon as I would take the unit out, the Bluetooth connection resumed, and it got a new UTC time from the satellites and sent the new string.
Mind you - the UTC time is now several seconds later, which in turn results in a 'gap' if I was travelling.
This being a given scenario for, say, a tunnel - ALTHOUGH in the tunnel we will still be able to maintain GPRS connections (anywhere in Sydney, anyway). So the UTC time will continue to be updated on the device, and thus sent to the server.
2. The 'old' Jargo prototype/mashup that I had before wouldn't synchronize with the device past initialization, or first contact. This had a nice 'buffer' effect:
as soon as I turned a GPS off, it [Jargo + phone] continued to send co-ordinates to the server as if it were still moving. So this gives me an idea on how to solve the following problem/handicap that I discovered, which ALSO results in points going missing.
Whenever the phone/device sends a request to the server, that request has to be processed before a 'point added' is returned to the phone.
This is a nice little problem, as
the device cannot continue to send other points until it verifies that the point has been added - the server holds the device 'on a leash' until it inserts the data into the database.
This has two solutions - one simpler than the other.
1. Create an array/list on the phone, into which new points get added as they come off.
This array can then be queried and have the items "popped" and sent to the server.
The server can then accommodate an array of points received as 'one' statement.
This is more of a 'controlled' buffer - than the 'bug' one I've had previously.
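Solution 1 could be sketched like this - points queue up on the phone while a send is in flight, then get drained and shipped as one batch. The names (`Point`, `drainBatch`) and the wire format are mine, not the actual Jargo code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "controlled buffer": fixes queue up while a send is in
// flight, then get drained and shipped to the server as a single batch.
public class PointBuffer {
    static class Point {
        final double lat, lon; final String utc;
        Point(double lat, double lon, String utc) { this.lat = lat; this.lon = lon; this.utc = utc; }
    }

    private final Deque<Point> pending = new ArrayDeque<>();

    // Called by the NMEA parser thread as fixes come off the GPS.
    public synchronized void add(Point p) { pending.addLast(p); }

    // Called when the previous request completes: pop everything queued so
    // far and send it as one request, e.g. "lat,lon,utc;lat,lon,utc;...".
    public synchronized String drainBatch() {
        StringBuilder sb = new StringBuilder();
        while (!pending.isEmpty()) {
            Point p = pending.removeFirst();
            if (sb.length() > 0) sb.append(';');
            sb.append(p.lat).append(',').append(p.lon).append(',').append(p.utc);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        PointBuffer buf = new PointBuffer();
        buf.add(new Point(-33.8688, 151.2093, "054530")); // hypothetical fixes
        buf.add(new Point(-33.8690, 151.2095, "054531"));
        System.out.println(buf.drainBatch());
        System.out.println(buf.drainBatch().isEmpty()); // buffer now empty
    }
}
```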
One mis-direction [I'm sick of the word 'problem'] that I can foresee is the direct effect this will have on 'real-time' display of a point. Until all points are sent by the device, the device is no longer where the data it just sent says it is.
This brings me to solution number:
2. Every time a device sends a request to the server, automatically respond, and THEN deal with the query.
This allows the database to queue all the insert statements, and add them at its earliest convenience.
This will also lead to a much faster interaction with the server, and not leave the Web interface tied up.
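A minimal sketch of that ack-then-process idea, assuming a queue plus a background worker (the slow database insert is faked with a sleep - on the real server it would be the SQL INSERT):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of solution 2: the request handler acknowledges immediately and
// hands the insert off to a queue; a worker thread drains the queue and
// performs the (slow) database insert at its own pace.
public class AckThenInsert {
    private final BlockingQueue<String> inserts = new LinkedBlockingQueue<>();

    // What the server does per request: queue the work, answer right away.
    public String handleRequest(String point) {
        inserts.add(point);
        return "point added"; // the phone is off the leash immediately
    }

    // Background worker: drain the queue at the database's convenience.
    public Thread startWorker(StringBuilder log) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String p = inserts.take();
                    if (p.equals("STOP")) return;
                    Thread.sleep(10); // stand-in for the actual INSERT
                    synchronized (log) { log.append(p).append('\n'); }
                }
            } catch (InterruptedException ignored) {}
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        AckThenInsert server = new AckThenInsert();
        StringBuilder db = new StringBuilder();
        Thread worker = server.startWorker(db);
        for (int i = 0; i < 5; i++) System.out.println(server.handleRequest("p" + i));
        server.inserts.add("STOP"); // let the worker finish the backlog
        worker.join();
        System.out.println("inserted " + db.toString().split("\n").length + " points");
    }
}
```

The five acknowledgements come back before the worker has finished a single insert, which is the whole point.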
All this brought me to another quandary [I decided to use a thesaurus for that one]:
the performance hit on the web server, and what actual load it can take 'comfortably', given either scenario.
I coded up a little load test, the results of which can be seen below.
They show the results for a 'slight' load (100 users), slowly [every 5 seconds] increasing how many requests get sent to the server.
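For reference, a stripped-down harness along these lines might look like the sketch below - a local stub server stands in for the real PHP endpoint, and the URL, endpoint name, and user count are made up for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Toy load test: N concurrent "users" hit an endpoint and we record the
// average response time. A local stub stands in for the real server.
public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/addpoint", ex -> {
            byte[] body = "point added".getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();

        int users = 100; // the 'slight' load scenario
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicLong totalNanos = new AtomicLong();
        CountDownLatch done = new CountDownLatch(users);
        String url = "http://localhost:" + server.getAddress().getPort() + "/addpoint";
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    long t0 = System.nanoTime();
                    HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                    c.getInputStream().readAllBytes(); // wait for the full response
                    totalNanos.addAndGet(System.nanoTime() - t0);
                } catch (Exception e) { e.printStackTrace(); }
                finally { done.countDown(); }
            });
        }
        done.await();
        pool.shutdown();
        server.stop(0);
        System.out.println("requests: " + users);
        System.out.println("avg response under 1s: " + (totalNanos.get() / users < 1_000_000_000L));
    }
}
```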
We can see that the processor is getting hammered,
yet the requests that are queued are fairly stable and mostly sit under 20.
If we look at the avg. response time, we see that it is also quite consistent.
On the other hand, if we add a few more users [say 10 times as many - up to 1000],
we get:
...which shows that our avg. response time is much higher - nearly double.
We also see that the processor is enjoying itself.
Now.
I have to code the 'new approach' to receiving user requests - and then I'll run some more load tests to see if there's any difference.