Watch: Soft CGM Beta Wrap-Up
Posted by Aspire Ventures on June 8, 2016

Four weeks ago, we welcomed eleven users into the first Soft CGM beta for Daibetter (formerly TempoHealth, LLC). If you're just tuning in, you can get more background on our diabetes management tool here. We had many goals for this particular beta, not the least of which was to put a potential product into the hands of users nationwide. In that respect alone, we succeeded: users were able to input their information and receive more than 1,600 blood glucose predictions over four weeks. In no particular order, here are some of those goals and how we'd rate our own success:
1.) Have an intuitive, easy-to-use app. If the app were trying to do nothing more than be a diabetes logbook, we'd be declaring victory. A lot of effort went into developing our interface. User feedback, as well as users' willingness to use the app regularly for the entire four weeks, spoke to our success here. While we will certainly refine the UX as we move forward, our team really hit it out of the park on this one.
2.) Have an app that's speedy as the day is long. We knew coming out of alpha mode that as Soft CGM accumulates more data to process, it takes more time to do so, and the biggest user gripe was how slowly the app opened. Midway through beta one, we made a lot of progress in this area but - following development protocol - didn't introduce these improvements into beta one. Users of beta two should notice a dramatic improvement here.
3.) Have an app that we (the internal team) at Aspire can quickly evaluate for effectiveness. The internal Soft CGM portal - most of which was developed during the beta - is incredible and has given us exactly what we need to evaluate the accuracy of Soft CGM. This will enable us, moving forward, to see exactly where we are (or aren’t) improving.
4.) Accurately predict blood glucose values. We mentioned at the start of the beta that the first CGMs were approved by the FDA with an overall Mean Absolute Relative Difference (MARD) accuracy score of 80%. In our own alpha tests, we had some week-long periods scoring above 80%. That said, these internal alpha testers (disclosure: I was one of them) certainly knew how to "game" the app for better performance, which was why it was critical to have the app tested by people who had never used it before.
In the end, our most accurate user during the beta period had a MARD of 73%, with two other users right at 70%. Most ended up between 62% and 67%. Two were closer to 50%.
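For readers curious how these scores are computed: MARD is conventionally the mean of |predicted − reference| / reference over all paired readings, where lower is better; the accuracy figures quoted above read as roughly 100 minus that conventional MARD. Here's a minimal sketch under that interpretation (the function name and framing are ours, not from our internal portal):

```python
def mard(references, predictions):
    """Mean Absolute Relative Difference, as a percentage.

    references/predictions are paired blood glucose readings in mg/dL.
    Lower conventional MARD means closer agreement; the "accuracy"
    scores quoted above would then be roughly 100 - MARD.
    """
    if len(references) != len(predictions):
        raise ValueError("need one prediction per reference reading")
    total = sum(abs(p - r) / r for r, p in zip(references, predictions))
    return 100.0 * total / len(references)

# Example: predictions off by 10% and 5% of their reference values
print(mard([100, 200], [110, 190]))  # approximately 7.5
```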
We also looked at the Clarke Error Grid for accuracy, with a goal of having blood glucose predictions land in Zones A and B as much as possible. Overall, we managed to do this more than 85% of the time, and one user ended up there 97% of the time - comparable to CGMs currently approved by the FDA.
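The Clarke Error Grid assigns each (reference, prediction) pair to one of five zones, from A (clinically accurate) through E (potentially dangerous). Below is a sketch using zone boundaries commonly cited in the literature from Clarke et al. (1987) - not necessarily the exact grid our internal portal implements:

```python
def clarke_zone(ref, pred):
    """Classify one (reference, predicted) glucose pair (mg/dL) into
    Clarke Error Grid zones A-E, using commonly cited boundaries."""
    if (ref <= 70 and pred <= 70) or abs(pred - ref) <= 0.2 * ref:
        return "A"  # within 20% of reference, or both hypoglycemic
    if (ref >= 180 and pred <= 70) or (ref <= 70 and pred >= 180):
        return "E"  # prediction would invert the treatment decision
    if (70 <= ref <= 290 and pred >= ref + 110) or \
       (130 <= ref <= 180 and pred <= (7 / 5) * ref - 182):
        return "C"  # likely to trigger overcorrection
    if (ref >= 240 and 70 <= pred <= 180) or \
       (ref <= 175 / 3 and 70 <= pred <= 180) or \
       (175 / 3 <= ref <= 70 and pred >= (6 / 5) * ref):
        return "D"  # fails to flag an out-of-range value
    return "B"  # benign deviation

def percent_in_ab(pairs):
    """Share of (reference, prediction) pairs in zones A or B, as a %."""
    zones = [clarke_zone(r, p) for r, p in pairs]
    return 100.0 * sum(z in ("A", "B") for z in zones) / len(zones)
```

A user whose predictions landed in Zones A and B 97% of the time would score 97.0 from `percent_in_ab`.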
As a first pass, these results are encouraging, even more so now that our team can analyze and improve our adaptive algorithms. In fact, there are so many ways we can improve the algorithms that our immediate course is to figure out what to implement prior to our next beta.
View our final video on the Beta 1 results for Soft CGM here.