Last night I was able, for the first time, to test the theory that the iPhone consumes significantly more power when doing 'push' sync with our service. I left a fully charged iPhone on a shelf in a location with five bars of AT&T 3G service, with push enabled (which is the default configuration). This morning, after 12 hours on the shelf, the iPhone's battery level indicator was still at 'full'. At first I suspected that it wasn't actually syncing, so I pulled the server log records for the device. These indicated that the phone had completed a 'ping' sync operation roughly every 7 minutes all night, which is exactly what we'd expect it to do. In previous investigations, when users have reported increased battery drain overnight, the server logs showed exactly the same thing: normal, regular pinging.
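For the curious, here is roughly the kind of check we run against the logs. This is a minimal sketch only, not our production tooling; the log line format, field names, and device id are made up for illustration.

    # Sketch only: compute the gaps between successive 'ping' sync records for
    # one device. The log format here is hypothetical, not our real schema.
    from datetime import datetime

    def ping_intervals(log_lines):
        # Yield minutes between successive 'ping' operations.
        last = None
        for line in log_lines:
            # assumed format: "2008-10-26T01:00:05 device=abc123 op=ping"
            fields = dict(f.split("=", 1) for f in line.split()[1:])
            if fields.get("op") != "ping":
                continue
            ts = datetime.strptime(line.split()[0], "%Y-%m-%dT%H:%M:%S")
            if last is not None:
                yield (ts - last).total_seconds() / 60.0
            last = ts

    sample = [
        "2008-10-26T01:00:05 device=abc123 op=ping",
        "2008-10-26T01:07:02 device=abc123 op=ping",
        "2008-10-26T01:14:11 device=abc123 op=ping",
    ]
    print([round(m, 1) for m in ping_intervals(sample)])  # roughly 7-minute gaps

Last night's device showed exactly this pattern of roughly 7-minute gaps, with no anomalies.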
Although this is a single data point, it does tell me that whatever is leading to the reports we see of significantly increased battery drain is probably a function of the cell radio in the phone, rather than the sync client or some strange interaction with our service. Perhaps when the phone is in a location with marginal service, for example, it burns much more power sending packets or flipping back and forth between towers.
I'm keen to dig deeper into this mystery and to do so we'll need reports from users who either do or do not see major battery drain from an otherwise idle iPhone that's using push sync. Please send any reports to support@nuevasync.com with subject 'iPhone Battery Drain Investigation'.
Sunday, October 26, 2008
Tuesday, October 21, 2008
Update: Service Restored (was: ISP Down Hence Service is Down)
Update: our Internet service was restored a few minutes ago. Service is running normally now.
The ISP that provides connectivity to our servers is currently totally down. Yes, the entire multi-state ISP, affecting many cities and businesses. The outage began a couple of hours ago. They don't know the cause, nor how long it will take to restore service. My own investigation suggests that the problem is at their peering point with the outside world, not anywhere local to here. Needless to say we will be changing ISP as soon as we can. Apologies to our users. Service will be restored as soon as our ISP gets their act together.
Saturday, October 11, 2008
iPhone 2.1 Freezing Update
I'm sure many users are wondering what's been happening with this issue. Although we're not yet able to say that the problem is 100% resolved, significant progress has been made. After a detailed analysis of server log records from devices belonging to users who reported that freezing had happened, our engineering team was able to reproduce the freezing problem reliably.
There are two parts to the freezing syndrome: why the device freezes, and the conditions that lead to it getting into the frozen state. On the first part, we believe there is the potential for a deadlock in the iPhone 2.1 sync software. We're also confident that the deadlocking problem will be fixed in the next iPhone software update; we don't know when that will be released.
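Since 'deadlock' can sound abstract, here is a purely illustrative sketch of the general pattern we suspect. This is not the iPhone's code (we have no visibility into it); it's just the classic two-lock hang, written in Python with made-up function names, where two parts of a program each wait forever for a lock the other holds.

    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def apply_server_change():        # hypothetical: applying a pushed change
        with lock_a:
            time.sleep(0.1)
            with lock_b:              # blocks: the other thread holds lock_b
                pass

    def send_device_change():         # hypothetical: sending a local change
        with lock_b:
            time.sleep(0.1)
            with lock_a:              # ...while this thread waits for lock_a
                pass

    t1 = threading.Thread(target=apply_server_change, daemon=True)
    t2 = threading.Thread(target=send_device_change, daemon=True)
    t1.start(); t2.start()
    t1.join(2); t2.join(2)
    print("deadlocked" if t1.is_alive() and t2.is_alive() else "completed")

A device stuck like this stops responding to sync activity entirely, which matches the frozen behaviour users describe.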
Freezing seems to occur when a particular set of circumstances arises: a change is pending from Google, but the iPhone times out while reading the change from our servers; then, later, before the device has caught up with that missed change, a second change is made on the device. Having discovered the set of conditions that can lead to the device deadlock, we wondered if we could make changes to the service that would reduce or even eliminate the potential to trigger it. As a result, new service code was deployed this past Wednesday. It makes sure that any changes from Google are flushed to the device soon after they are seen. The result is that any device that might have got into the pre-freezing state, where a change was missed due to a timeout, will no longer do so. Unfortunately, devices that were already in that state before the new code was deployed can still freeze up. This is because our change only addresses the first stage towards freezing, not the second, which happens outside our control, on the device.
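To make the change concrete, here is a simplified sketch of the idea behind Wednesday's deployment. The class and method names are hypothetical, not our actual server code; the point is only that a change seen from Google is delivered promptly rather than left waiting for the device's next request, which shrinks the window in which a change can sit undelivered and be missed after a timeout.

    # Hypothetical sketch of the 'flush changes promptly' behaviour described
    # above; names and structure are illustrative, not our real server code.
    class DeviceSession:
        def __init__(self, device_id, transport):
            self.device_id = device_id
            self.transport = transport   # whatever delivers changes to the device
            self.pending = []            # changes seen from Google, not yet delivered

        def on_google_change(self, change):
            self.pending.append(change)
            self.flush()                 # new behaviour: push right away, don't wait

        def flush(self):
            while self.pending:
                change = self.pending[0]
                if not self.transport.deliver(self.device_id, change):
                    break                # delivery failed (e.g. timeout): retry later
                self.pending.pop(0)

    class PrintTransport:                # stand-in transport for demonstration
        def deliver(self, device_id, change):
            print("deliver", change, "to", device_id)
            return True

    DeviceSession("device-123", PrintTransport()).on_google_change({"folder": "calendar", "id": 42})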
So far the results are encouraging. The number of users reporting new freezing episodes has dropped significantly. Evidence we are able to gather from server logs is also positive.
However, I don't feel that we can declare complete victory yet. There may be conditions other than the ones we have studied that can trigger the deadlock.
We'd like to determine the best method to un-freeze a device. So far only the 'Reset All Settings' method works reliably for us, although users have reported other methods working for them here (changing the NuevaSync password, turning on flight mode, etc.). If you have thoughts on this, please post comments.