Thursday, December 31, 2015

Keep on Climbing - Recap of 2015

It's the last day of 2015 and therefore time for the annual recap of the year. For me it was a great year, in my private life as well as professionally.

I'm still working from home for one of the best interactive agencies in the world with a group of fantastic people. I'm happy that I got back into computer graphics programming this year, working in the emerging space of virtual reality with devices like the Oculus Rift, Google Cardboard, Samsung Gear VR and more. And the next chapter is already being opened with new innovations in the field of Augmented and Mixed Reality.
I was also able to give a few conference talks and was honored to speak at the //build developer conference again this year, this time together with my good friend and colleague Laurent.
Oh, and I got promoted to Lead UX Developer in 2015 as well.

It was also a great year in my private life. My wife and I had our 10 year wedding anniversary and I love her more every day. Our four daughters are doing fine in (pre-)school and bring us joy every day.
For me personally it was an intense change too when I decided in February 2015 that it was finally time to lose some weight. By now I have lost about 35 kilograms (77 lbs) by decreasing the energy input from (unhealthy) food and increasing the energy output with cycling. I likely turned a few more kilograms of fat into muscle through the training as well. I did some sweet bike rides, like a nice 113 km tour around Lake Washington and Lake Sammamish in October when I was in the Seattle area, and another 100+ km ride a few days ago climbing into the Erzgebirge (Ore Mountains). In December alone I managed to ride more than 500 km while climbing 5200 meters. Glad I reached this point, considering I had 35 kg more to carry when I started in February.

For the next year I want to reduce my weight just a bit more, and I'm also planning to ride my first mountain bike cross-country marathon, the famous Erzgebirge Bike Marathon, plus some other nice challenges.
In the tech field I hope to be renewed as Windows Platform Dev MVP and to be able to work more in the field of VR, AR and MR. I have the feeling another UX revolution is happening right now. It's an exciting time to be a developer for sure.

Bring it on 2016!

Thursday, December 3, 2015

IdentityMine Post - Client-side caching matters

I was recently part of a project at IdentityMine where I worked on a Xamarin application for the Amazon Fire TV Android platform. We faced some challenges of the kind we love to tackle. One of them was the speed of the backend; however, improving the backend itself was out of our scope. Still, the client application needed to work with that slow backend somehow. Otherwise, with no data, there is no application.
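The title gives the approach away: client-side caching. Just to illustrate the general idea, here is a minimal sketch of a cache wrapper; the names and the simple in-memory store are my illustration, not the actual project code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative client-side cache: serve a cached response if it is still
// fresh; otherwise hit the (slow) backend and remember the result.
public class CachedBackendClient<T>
{
    private class Entry { public T Value; public DateTime FetchedAt; }

    private readonly Func<string, Task<T>> _fetch;   // the backend call
    private readonly TimeSpan _maxAge;               // freshness window
    private readonly ConcurrentDictionary<string, Entry> _cache =
        new ConcurrentDictionary<string, Entry>();

    public CachedBackendClient(Func<string, Task<T>> fetch, TimeSpan maxAge)
    {
        _fetch = fetch;
        _maxAge = maxAge;
    }

    public async Task<T> GetAsync(string key)
    {
        Entry entry;
        if (_cache.TryGetValue(key, out entry) &&
            DateTime.UtcNow - entry.FetchedAt < _maxAge)
        {
            return entry.Value; // cache hit, no network round trip
        }

        var value = await _fetch(key); // slow path: ask the backend
        _cache[key] = new Entry { Value = value, FetchedAt = DateTime.UtcNow };
        return value;
    }
}
```

A real implementation would also persist the cache and handle invalidation, but the instant-data effect is already visible with something this small.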

I wrote a post about the learning experiences for the IdentityMine blog. If you are interested in client-side development you might want to read it.

Wednesday, September 23, 2015

We've Come a Long Way Baby - WriteableBitmapEx now on GitHub


The WriteableBitmapEx project is now 6 years old and times have changed since it all began. It started with Silverlight, and more XAML flavors were added over time until all of them were supported: Windows Phone, WPF, WinRT Windows Store XAML, (Windows 10) UWP and still Silverlight.

The version control system and open source services landscape has changed as well, and the CodePlex infrastructure has not been too reliable recently. That's why I finally took the time to move the WriteableBitmapEx source code from CodePlex's SVN bridge to Git and make it available on GitHub: https://github.com/teichgraf/WriteableBitmapEx
Additionally, the open source license was changed from Ms-PL to MIT, which should simplify the usage as well. See the CodePlex issue tracker for a list of features that will be added in the future. Please use the GitHub Issues functionality to report new issues which are not already tracked. Of course the latest binaries will continue to be distributed via NuGet.

Friday, May 22, 2015

My Top 5 Wishes for Microsoft Band and Health

You might have seen my previous blog post ZzzZzz - Microsoft Band Sleep Tracking in the Testbed. In that article I compared the Microsoft Band and Health data to a proven scientific method for sleep monitoring. While writing it, a couple of ideas and suggestions came to my mind for the Band in general and some in particular for the sleep tracking feature.

Here's my Top 5:
  1. Pulse Oximeter:
    Oxygen saturation is a key metric for medical analysis. For example, sleep apnea shows up as a drop in oxygen saturation, and many other dysfunctions surface through that metric. It's also interesting for fitness and sports in general, and some fitness trackers actually provide this data already.
    The Microsoft Band measures the heart rate optically using a photo detector, and this is the same principle used for pulse oximetry, so the sensor is already there. I think it's just a matter of adding the right algorithm and surfacing the data in Microsoft Health (see the sketch after this list).

  2. Sleep tracking Precise mode:
    I would really love it if the Band's sleep tracking had a Precise mode which would take samples more often during the night and would be allowed to run down the battery in 8-10 h, so I could decide when the precision-battery tradeoff is applied.

  3. Recurring alarms:
    The vibration alarm is awesome, but it needs a feature to make it recurring. The Band could just sync with my Windows Phone alarms, but it should also work when the Band is not connected to the phone via Bluetooth.

  4. Smart alarm:
    Although the sleep stages are not 100% reliable, they would still be good enough to wake the user up within a certain timeframe once the sleep is not Restful anymore. Lots of fitness trackers which actually have less sensor data provide that feature already.

  5. Notifications:
    Currently the Notification Center and Mail tiles keep their count until the tiles are opened. This is rather annoying, as the Band becomes another notification hub which needs to be cleared in addition to the phone. I'm sure it's a technical challenge with the Windows Phone APIs, but for vNext it would be great if the Mail and Notification Center tile counts were also synced over Bluetooth when those are cleared on the phone.
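To give wish #1 some substance: classic pulse oximetry estimates oxygen saturation from the ratio of the pulsating (AC) to the steady (DC) light absorption at two wavelengths. Here's a toy sketch of that ratio-of-ratios idea; the linear calibration below is a rough textbook approximation, and the Band's actual wavelengths and calibration are unknown to me:

```csharp
using System;

public static class PulseOximetry
{
    // Toy ratio-of-ratios SpO2 estimate from two photodetector channels.
    // (acRed/dcRed) and (acIr/dcIr) are the pulsating (AC) and steady (DC)
    // components per wavelength; the linear mapping below is a common
    // textbook approximation, not a medically valid calibration.
    public static double EstimateSpO2(double acRed, double dcRed,
                                      double acIr, double dcIr)
    {
        double ratio = (acRed / dcRed) / (acIr / dcIr);
        double spo2 = 110.0 - 25.0 * ratio; // empirical approximation
        return Math.Max(0.0, Math.Min(100.0, spo2));
    }
}
```

Getting from a raw photodetector stream to stable AC/DC components is of course the hard part, which is probably why it's "just" a matter of the right algorithm.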

ZzzZzz - Microsoft Band Sleep Tracking in the Testbed

I'm a perfectionist and hungry for knowledge, so I went to a sleep clinic just to check how well the Microsoft Band sleep tracking works compared to proven scientific methods.
Unfortunately it's not quite like that: I actually had to go to the sleep lab since I have some issues there. Anyway, since I had to be at the clinic's sleep laboratory and had just gotten my Microsoft Band, I took the occasion and also used the Band's sleep tracking feature while being monitored by the lab's polysomnography sleep study.
After my 2 nights at the clinic I asked the lab staff and physician if I could get some personal printouts of my results. They were all nice people, and the printouts provide some good data for comparison with the Microsoft Band and Health data.
A funny anecdote: the lab's senior physician is actually a big Windows Phone enthusiast and even tried Windows 10 for Mobile on his device, so we had some good chats, also about HoloLens and how it could help them...

The science of sleeping
Sleep studying is a huge scientific area with a lot of active research and many parts of sleep are still unknown territory. I'm not a sleep research expert but learned a few things recently.
There are a couple of different methods for sleep monitoring / tracking. The gold standard for sleep labs is polysomnography (PSG), which uses a bunch of sensor data like EEG, ECG, oxygen saturation, eye movement, breathing sensors, muscle movement, etc. For this, 2-3 dozen cables are connected to your body during a PSG sleep study.
Another, more basic method is called actigraphy, which just tracks movement and maybe a few other parameters. It's mainly used to find out when a person went to bed and got up, but it cannot provide solid data about sleep stages, as those depend mostly on brain activity. The sleep features of fitness trackers like the Jawbone Up, Fitbit, etc. are basically actigraphs. The folks of Exists wrote a very good blog post about those devices and also link to some research studies about actigraphy and how well it works in comparison to PSG. The Microsoft Band also uses actigraphy plus some additional sensor data like samples of the heart rate. Maybe it's even using the skin temperature sensor as well.

Science basically defines five stages of sleep: the phase when you are awake (W); the rapid eye movement (REM) phase when you are dreaming; and three non-REM phases, where NREM1 (N1) is the transition to sleep, NREM2 (N2) is light sleep when the heart rate starts to decrease, and NREM3 (N3) is deep sleep when you are fully relaxed and difficult to awaken.

Enough of the theory...


Analysis of Night 1
The first night at the lab was pretty bad. All those cables and a clinic bed were very distracting, so my actual sleep was very short and the phases not at all optimal. Keep this in mind when looking at the graphs; my home sleep data is usually way better.
In the table below I listed the parameters the Band provides and compared them to the lab data. I omitted the total duration (TIB), time to fall asleep and sleep efficiency, since the lab and Band measurements started/ended at different times and those parameters depend on that. The resting heart rate is also excluded as no clearly comparable data is available in the PSG printouts, but judging from the graphs the Band's resting HR is roughly correct.

Parameter                   | Band | Lab PSG | Band error [%]
Actual Sleep (TST) [min]    | 257  | 319.5   | -20
Light Sleep (N1 + N2) [min] | 184  | 271.6   | -32
Restful Sleep (N3) [min]    | 73   | 30.3    | +141
Woke up [count]             | 10   | 9       | +11
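The error column is the Band's relative deviation from the lab value, (Band − Lab) / Lab × 100; for example (257 − 319.5) / 319.5 ≈ −20%.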

As expected, the Band's actigraphy does not provide data as accurate as a PSG, and the sleep stages like Light sleep and Restful sleep don't match the actual scientific sleep stages very well.

In the illustrations below I overlaid the Microsoft Band's Health sleep charts (colored bars) with the PSG sleep stage graph (black lines). The black scan of the PSG printout was scaled to match the time axis and size of the colored Band data.


Night 1: Colored Band stages data overlaid with black PSG stages

This overlay is pretty interesting and tells quite a bit. Note the wake phases mostly match, but since the Band only samples every few minutes, it does not record higher-frequency changes and therefore misses short naps or brief awake periods.
The REM phase is considered Light sleep by the Band, which makes sense. The Band's Restful sleep is mostly N3, but also a good portion of N2. I assume the Band mainly uses the heart rate to define the Restful stages, and since the heart rate already starts to decrease in N2, the Band sometimes labels the stage Restful in N2 until the body goes into N3. The next two overlays support this theory.
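To make that theory concrete, here's a toy sketch of how such a heart-rate-based Restful labeling could work: mark a sample as Restful when the heart rate stays well below the night's baseline for a while. This is purely my speculation to illustrate the idea, not Microsoft's actual algorithm:

```csharp
using System;
using System.Linq;

public static class RestfulGuess
{
    // Toy heuristic: a sample counts as "Restful" when the heart rate has
    // been below 90% of the night's median for the last few samples.
    // Pure speculation to illustrate the theory above, nothing official.
    public static bool[] LabelRestful(double[] heartRate, int window = 3)
    {
        var sorted = heartRate.OrderBy(h => h).ToArray();
        double median = sorted[sorted.Length / 2];
        double threshold = 0.9 * median;

        var restful = new bool[heartRate.Length];
        for (int i = window; i < heartRate.Length; i++)
        {
            bool allLow = true;
            for (int j = i - window; j <= i; j++)
                if (heartRate[j] >= threshold) { allLow = false; break; }
            restful[i] = allLow;
        }
        return restful;
    }
}
```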


Night 1: Colored Band stages data overlaid with black PSG heart rate
In the above illustration the PSG heart rate was overlaid on the Band's sleep stages and heart rate chart. As can be seen, the Band's Restful blocks correlate with longer periods of lower heart rate.

The third illustration below has the PSG heart rate on top of the Band's heart rate chart.


Night 1: Band's heart rate data (thick purple line) overlaid with black PSG heart rate

Again, the Band's low sampling frequency in sleep mode does not record higher-frequency changes like peaks in heart rate, which can be caused by sleep apnea when the body tries to compensate for low oxygen saturation. Improving those measurements is one of my top wishes / suggestions for the Microsoft Band / Health service.


Analysis of Night 2
The second night was a bit better and I felt more rested afterwards. I guess mostly because I was super tired from night 1 and therefore slept better even with all those PSG sensors attached.

Parameter                   | Band | Lab PSG | Band error [%]
Actual Sleep (TST) [min]    | 310  | 298.5   | +4
Light Sleep (N1 + N2) [min] | 183  | 199.3   | -8
Restful Sleep (N3) [min]    | 126  | 70.1    | +80
Woke up [count]             | 9    | 8       | +13

Comparing the Night 1 and Night 2 data shows that my subjective impression was not wrong: I got more than twice as much deep sleep (N3) during night 2.
The Band vs. lab PSG comparison also shows smaller errors, which suggests that higher sleep quality produces better Band results. I'm not sure how the Microsoft Health analysis of the Band tracking data actually works, but I wouldn't be surprised if it used Azure's machine learning service and the training datasets contained mostly healthy, normal sleep patterns.

Night 2: Colored Band stages data overlaid with black PSG stages

Again, the Band labels longer periods of N2, and always N3, as Restful sleep.


Night 2: Colored Band stages data overlaid with black PSG heart rate

Night 2: Band's heart rate data (thick purple line) overlaid with black PSG heart rate


Conclusion
The Microsoft Band sleep tracking feature is a nice gimmick, and it's not really surprising that it doesn't look great when compared to proven scientific sleep monitoring. The Band is not bad for the total sleep duration, the actual sleep time and the wake-up phases. The different stages like Light and Restful sleep are not terribly wrong either, but should not be trusted too much. The analysis of sleep stages requires actual brainwave data, which the Band's sensors do not provide. It seems the Band's Restful stage is mostly based on heart rate and movement.
Of course the Band should not be used to diagnose sleep disorders like sleep apnea, not only due to the missing sensors but also because of the low-frequency sampling in sleep mode. Hopefully this can be improved. Still, if you feel more tired than usual and the Band / Health sleep patterns are totally off, go see a doctor soon!
In the end the Band is a fitness tracker and not a scientific measurement device. The Band and Microsoft Health's sleep tracking is still a nice feature to get an overall idea about your sleep patterns if you keep the limitations in mind.

Happy tracking!

Wednesday, May 20, 2015

100 done...off to the next 100!

Oh wow, this is my 100th blog post here. Great things happened since I started this blog 6 years ago.
Most importantly, we were blessed with our third and fourth daughters. I also received the Microsoft MVP award a few times, first for Silverlight, then for Windows Phone Dev, now for Windows Platform Dev, and became a Nokia Developer Champion. I joined IdentityMine almost 3 years ago. I judged the Imagine Cup 2011 and the German App Campus, spoke at a few conferences like //build 2014 and 2015, and wrote a couple of articles for different magazines. I also published many releases of my apps and open source projects. And of course posted 100 blog posts with hopefully useful content.

Busy times but good times. I hope this goes on for a while.
Thanks for being part of it!

Tuesday, May 12, 2015

Bridges connect the world - iOS and Android on Windows 10. What?

Bridge to Astoria, OR. Nov 2014
Photo from our road trip to Portland
At the //build 2015 conference Microsoft announced their development stories for Windows 10. This also includes bridges to reuse existing iOS, Android and other investments for Windows 10.
The keynote showed how an existing iOS game was ported to Windows 10 without much effort. The demonstrated one-stop porting solutions for Android and iOS raised the question of why anyone should bother to develop natively for Windows 10 anymore, and created quite some confusion and uncertainty in the Windows developer community. A few developers at //build were really worried about those announcements, as some of their business is mainly porting iOS and Android apps to Windows Phone.

I think at the moment there's just too little knowledge to make decisions based on the announcements and assumptions floating around. There were two sessions at //build about Project Astoria for Android and Project Islandwood for iOS. Keep in mind both projects are in an alpha/beta stage and aren't publicly released at the moment, so there's no public hands-on experience and no independent in-depth analysis available yet. It's also worth noting that Project Astoria allows one to build Windows apps for phones, but not for the full Universal Windows Platform at the moment. More information is expected during summer 2015, and you can also try to sign up for an early access program for Astoria and Islandwood. There's also a good post by my friend Alan from AdDuplex. Peter Bright published a good Ars Technica article as well.

In my opinion there's one particularly good use case:
Share existing business logic across platforms and build the UI natively for the platform.
This way a common cross-platform app core is reused and the UI is built natively on top of that layer to provide the best UX. Porting an app completely might work for games with a custom UI and some simple apps, but advanced apps usually leverage too many platform-specific features for a one-stop port to work at all, and a great app adapts to the UX of its platform.
Sharing core business logic is actually a common use case I see with our larger clients who want to bring their existing apps to Windows or Windows Phone. In the projects I've been involved in over the last few years, the shared portion was usually implemented in C/C++, as it is still one of the best cross-platform options since most platforms support C/C++ in some way. Right now it is still quite an effort to get the custom C/C++ libs from other platforms adapted for Windows Phone compilation/linking, even with elevated API access rights. The cross-platform C++ Clang support in Visual Studio 2015 allows us to just recompile the Android or iOS C/C++ code and could help us a lot in such scenarios.
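To illustrate the pattern: the C/C++ core is compiled for each platform (DLL on Windows, .so on Android, static lib on iOS) and a thin binding exposes it to the native UI layer. On Windows that binding could be P/Invoke; all names in this sketch are made up:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical binding to a shared C/C++ business logic core.
// "AppCore.dll" and "appcore_get_cart_total" are illustrative names only;
// the same C sources would be built as .so / .a for Android and iOS.
internal static class AppCore
{
    [DllImport("AppCore.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern double appcore_get_cart_total(int cartId);
}

public class CartViewModel
{
    public int CartId { get; set; }

    // The platform-specific UI (XAML here) binds to this property while
    // the actual calculation lives in the shared native core.
    public double Total
    {
        get { return AppCore.appcore_get_cart_total(CartId); }
    }
}
```

The same headers compile for Android and iOS, so only this thin binding layer and the UI remain platform-specific.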
After writing this blog post I came across this excellent post by the Visual C++ team which describes just that, and this post which provides more details about the Visual Studio 2015 cross-platform C++ features.

As I see it, Xamarin will be the best choice for greenfield cross-platform projects to share as much code as possible and still provide the best UX. The new Visual Studio 2015 tooling for Windows 10 could make the reuse of existing iOS and Android business logic easier when a greenfield approach is not desired.

On the last day of //build I had a nice breakfast and roundtable with an executive of the Windows Dev Platform, and of course we talked about the //build announcements. He actually shared the same vision about the easier reuse of business logic. I think we made it clear that quite some confusion was created in the dev community around Islandwood and Astoria, but we were told Microsoft will release more information to make the vision for those porting tools clearer.

In the end I think there's no need to be worried. Porting tools have never been a one-stop solution, so I'm confident Islandwood and Astoria won't be the holy grail either. As usual, a keynote contains a good portion of marketing, and software development is never that easy, so always take such announcements with a grain of salt and postpone final decisions until the promised tooling is delivered for real-world scenarios.
If those tools help make porting existing iOS and Android apps easier and the Store gets the apps users are looking for, it's good for all Windows developers in the end.
It's just a matter of adapting to leverage the new tooling and helping clients create great apps.

Don't Worry, Be Happy!

Friday, May 1, 2015

Magic Moments - Recap of the Holographic Academy

It's the afternoon of April 30th, 24 hours after I was lucky enough to be part of the first group of external developers worldwide who received Holographic Academy training: in-depth, hands-on time with HoloLens prototype hardware, actually deploying apps to it during a 4-hour course.
There was no NDA I had to sign, so here's a quick recap. I actually double-checked with some of the Microsoft mentors there that no NDA was in place.
In the meantime they have also released a video where I have my 3 seconds of fame. And my friend Gregor published a German interview with me. I've also written an article for the German print magazine PC Magazin.

tl;dr
The HoloLens is the most impressive device I've ever tried. It's mind-blowing. The Holographic Academy workshop was fun and accessible for all kinds of developers.

Setting
This event happened at a hotel near the //build conference center. The HoloLens team basically got a whole floor there. They had multiple security checks and lots of guards standing everywhere. I had to lock away my stuff, including my smartphone, so no photos or notes were allowed, but I got some nice HoloLens swag in exchange. After passing the 2nd or 3rd security checkpoint, a team member measured my IPD (65.5 mm), then I passed another guard and finally was escorted into a fancy room with a few dozen attendees, lots of cameras, many HoloLens team members who helped the attendees as mentors, development PC setups and, most importantly, lots of HoloLens prototype devices (HL).

Hardware
The HL prototypes already looked very polished and have the computing unit integrated. Still, the devices we tried had a clear laser-printed label on them: "Prototype" and "Not FCC approved...".
After adjusting the HL to the head it's quite comfortable to wear, not too heavy, and there is no annoying fan noise. Unlike other devices, the HL also works well when the user is wearing glasses, so it's good for people like me.
So far only a few official pieces of information about the hardware are available, and no further information was shared at the event. Even though I kept asking, the answer was always: "We are not ready to talk about that today." I heard that often, but I was allowed to take an in-depth look at the devices. The following is based on my personal analysis, so take it with a grain of salt.
There are at least 5 cameras on the front: 1 in the center and 2 on each side, where 1 of the 2 points to the side and the other to the front. I suspect those 4 are the depth cameras. Covering 2 of those 4 with my hands still didn't break the recognition, which was impressive. It was hard to judge the minimum and maximum distance it supports, but I'd say it was around 0.5 - 5 m, or just like the Kinect v2 in the range of 0.8 - 4 m.
The projection screens for the left and right eye are not just flat 2D displays but kind of layered screens. I spotted 3 layers for each eye, and it seemed like each layer is for either the red, green or blue component. These special screens are key for seamlessly merging the virtual objects into the real-world environment, and that's the reason for calling the HoloLens a holographic device. The actual screens were a bit small on the HL prototypes available, so the field of view was narrower than expected, but it's likely that the next generation and consumer devices will improve this. The front also has a few tiny holes which I believe are microphones for the amazing speech recognition and sound analysis. For audio output there are small speakers in the headset band. They provide an immersive spatial sound I've never experienced before. More about that below.
The device is charged and connected to the PC for deployment via Micro USB. There also seems to be a small special motherboard connector at the top of the front.

Walkthrough
The actual academy used Unity to create a 3D scene. From within Unity a Visual Studio UAP solution was generated, which was then deployed to the HL. Ctrl+F5 onto HoloLens! W00t!
I heard there are different tutorials for the academy. My group did the Origami tutorial, which means the base was a 3D scene with various origami-paper-like 3D models, or Holograms as the presenters called them.
The first chapter covered how to set up the device and run a little demo app where a virtual RC truck could be driven and pushed through the real-world room. On the floor, on the table, on the sofa, on humans: drive the virtual truck everywhere in the room!
The second chapter walked us through getting the base Unity Origami scene deployed onto the device, exploring it from different angles in the room and letting it collide with real-world objects.
The next step involved adding Unity rigid body physics to the virtual 3D objects and adding a virtual indication of where one is looking while rotating the head. It was basically a torus which was projected onto the scenery on top of virtual and real-world objects. This is called Gaze. We also added an Air Tap gesture, where one taps the index finger into the air. The gesture together with the raycasting of Gaze tells you which object in 3D space was selected and basically works like a spatial touch tap / mouse click interaction trigger. Gaze is the mouse cursor movement in 3D space, if you will, and the Air Tap is the mouse click. In the case of the Origami demo it triggered a physics action to let origami paper balls fall down. All this dynamic functionality was implemented using Unity C# scripts.
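A minimal Unity C# sketch of the Gaze part, i.e. raycasting from the head and placing a cursor at the hit point; the structure is my reconstruction, not the actual academy script:

```csharp
using UnityEngine;

// Gaze cursor sketch: cast a ray from the user's head (the main camera on
// HoloLens) into the scene and place a cursor, e.g. the torus, on whatever
// virtual or spatially mapped real-world surface is hit.
public class GazeCursor : MonoBehaviour
{
    public GameObject cursor; // e.g. the torus prefab

    void Update()
    {
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit))
        {
            cursor.SetActive(true);
            cursor.transform.position = hit.point;
            // Align the cursor with the surface it is projected onto.
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
            // hit.collider.gameObject is the object an Air Tap would select.
        }
        else
        {
            cursor.SetActive(false);
        }
    }
}
```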
After that, speech recognition was added in a C# script and it was very impressive. I just had to add an ordinary string like "Reset Scene" to a script, without any pre-defined grammar, and it just worked. My HL device even responded only to my voice, which means my HL did not react to the same voice commands another attendee was saying.
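The released Unity tooling for Windows 10 exposes this via the KeywordRecognizer class; whether the academy build used exactly this API I can't say, but a sketch along those lines looks like this:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech; // Windows 10 / HoloLens only

// Speech command sketch: register plain strings as keywords,
// no grammar file needed.
public class SpeechCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "Reset Scene", "Drop Ball" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Heard: " + args.text);
        // e.g. reset the Origami scene or trigger a physics action here
    }
}
```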
Later, spatial audio was added, which means playing sounds in the 3D virtual environment for the 3D object interactions. This was mind-blowing! Those small speakers make the sound appear spatially from the scenery and around it. It's totally different from just headphones. You actually think the other people in the room hear the same, but they don't.
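On the Unity scripting side this essentially boils down to fully 3D AudioSources placed at the holograms; the HoloLens-specific HRTF spatialization sits below that layer and is not shown in this minimal sketch:

```csharp
using UnityEngine;

// Spatial audio sketch: play an impact sound at the hologram's position
// as a fully 3D sound, so it appears to come from the object itself.
public class ImpactSound : MonoBehaviour
{
    public AudioClip impactClip;
    private AudioSource source;

    void Start()
    {
        source = gameObject.AddComponent<AudioSource>();
        source.clip = impactClip;
        source.spatialBlend = 1.0f; // 1 = fully 3D positional audio
    }

    void OnCollisionEnter(Collision collision)
    {
        source.Play(); // e.g. an origami ball hitting the real-world floor
    }
}
```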
Then we added so-called spatial mapping, which means overlaying real-world objects with a virtual layer / mesh. In the academy we overlaid it just with wireframe polygons, but trying it on the HL is a very impressive experience.
In the last chapter we removed parts of the real-world floor by overlaying it with a virtual 3D animation which showed a crack in the floor where origami birds were flying in the underground. The screens of the HL are bright and saturated enough that the rendering actually overlays the real world without transparency, and one could walk around and look into the underground from different angles. Mind blown again!

Further thoughts
There were more chapters / steps in between for the Origami tutorial, some perhaps in a different order, but I think I covered the key parts above. I had never really used Unity before, and I think it was a good choice by the HoloLens team for this //build Holographic Academy, as it's accessible to many developers who have never done any 3D computer graphics before and it quickly produces nice results. Personally, I would rather have gotten my hands on the underlying Direct3D surface and rendered my own 3D content to it. Honestly, I was a bit bored by Unity, so I added my own custom speech commands for more physics fun and explored the real HoloLens API with Visual Studio's Object Browser on the side, and found some cool stuff in there.
It is great that the HoloLens team lets developers get their hands on real device development less than 100 days after showing the first prototypes to the public. Kudos!

Final words
The HoloLens is the most impressive Augmented Reality experience I've ever tried. It's real, not just smoke and mirrors! There are so many innovations in it besides the seamless merging of the virtual and real world with the layered holographic screens: spatial audio, perfect speech recognition, spatial gesture recognition, low-latency real-time mapping of the real-world environment, and all that fully integrated into a single head-mounted device without any external cables needed.
24 hours after attending the academy my mind is still blown, and I can't wait to get my hands dirty on a dev device at home, pushing it to the limits with the real SDK and Direct3D.
Exciting times!

Tuesday, March 31, 2015

Staying Alive! - WriteableBitmapEx 1.5 is out

After a couple of minor updates on top of version 1.0, which led to 1.0.14, I'm happy to announce that WriteableBitmapEx 1.5 is now available.
Many contributions were integrated and lots of bugs were fixed. Among the additions are some nice new color modifications and also the long-awaited DrawLine with variable thickness, pen support, better anti-aliased lines, Cohen-Sutherland line clipping, even-odd polygon filling and alpha-blended shape filling... Read the details at the end of this post or in the release notes.

WriteableBitmapEx supports a variety of Windows platforms and versions: WPF, Silverlight, Windows 10 Universal App Platform (UAP), Windows 8/8.1, Windows Phone WinRT and Windows Phone Silverlight 7/8/8.1.


You can download the binaries here or via the NuGet package. The packages contain the WriteableBitmapEx binaries. All samples and the source code can be found in the repository.

A big thank you to all the contributors, bug reporters and users of the library who helped to shape this. You rock!

Changes
  • Added lots of contributions including DrawLine with variable thickness, pen support, improved anti-aliasing and Wu's anti-aliasing algorithm
  • Added usage of Cohen-Sutherland line clipping for DrawLineAa and DrawLine, etc.
  • Added support for alpha blended filled shapes and adapted the FillSample for WPF
  • Added FillPolygonsEvenOdd() which uses the even-odd algorithm to fill complex polygons with more than one closed outline like for the letter O
  • Added AdjustBrightness(), AdjustContrast() and AdjustGamma() methods
  • Added Gray() method which returns a gray-scaled version of the bitmap
  • Fixed regression issue with alpha blending for Blit for non-WinRT
  • Fixed bug in Blit Alpha code for WPF when source format is not pre-multiplied alpha
  • Fixed bug #21778 where FromStream for WPF needs to be called inside Init scope
  • Fixed issue with IndexOutOfRangeException in DrawLine method
  • Fixed Invalidate for Silverlight BitmapContext.Dispose
  • Fixed many more reported issues
  • ...
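To give an idea of how some of the 1.5 additions look in code, here's a small sample for the WPF flavor. The method names come from the changelog above, but the exact overloads may differ slightly, so check the release notes and samples:

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Quick tour of some WriteableBitmapEx 1.5 additions (WPF flavor).
// Method names are taken from the changelog above; the exact signatures
// may vary, so treat this as a sketch.
public static class Wbx15Sample
{
    public static WriteableBitmap Draw()
    {
        var bmp = BitmapFactory.New(512, 512);

        using (bmp.GetBitmapContext())
        {
            bmp.Clear(Colors.White);

            // New: anti-aliased line with variable thickness.
            bmp.DrawLineAa(20, 20, 480, 120, Colors.Black, 5);

            // New: even-odd filling for polygons with holes, like an "O".
            var outer = new[] { 100, 200, 400, 200, 400, 450, 100, 450, 100, 200 };
            var hole  = new[] { 200, 280, 300, 280, 300, 380, 200, 380, 200, 280 };
            bmp.FillPolygonsEvenOdd(new[] { outer, hole }, Colors.SteelBlue);
        }

        // New color modifications return adjusted copies of the bitmap.
        return bmp.AdjustBrightness(20).Gray();
    }
}
```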