Apple's Vision & Machine Learning - Simplified

I am adding vision to my Sidekick bots, but before diving in, I'd like to evaluate the technology first.

The Goal

The goal of this article is to simplify the building blocks of Apple's machine learning framework (Core ML) and its Vision framework down to the bare minimum: get them working and keep them easy to understand. From there, we can expand our understanding to tackle more complex work.

The Plan

  1. Use Create ML, Xcode's developer tool, to train an ML model that classifies hand-posture images (a minimal training sketch follows this list)
  2. Capture images from the device's camera
  3. Use our trained ML model to classify the images captured from the camera (see the classification sketch below)
  4. Retrain our ML model to make it more accurate
  5. Bob's your uncle and the world is our oyster. Let's do something cool with this!
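
Step 1 can be done entirely in the Create ML app's point-and-click UI, but the same training is scriptable with the CreateML framework, which makes retraining (step 4) repeatable. Here is a minimal sketch for a macOS playground; the directory path and model name are placeholders for wherever you keep your labeled hand-posture images.

```swift
import CreateML
import Foundation

// A minimal sketch of step 1, assuming a folder of labeled images:
// HandPostures/
//   thumbsUp/   <- one subfolder per posture label, full of example photos
//   fist/
//   openPalm/
let trainingDir = URL(fileURLWithPath: "/path/to/HandPostures") // placeholder path

// Train an image classifier; Create ML handles feature extraction for us.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Check how well it did on the data it was trained on.
print("Training accuracy: \(100 * (1 - classifier.trainingMetrics.classificationError))%")

// Save the model so Xcode can generate a Swift class for it.
try classifier.write(to: URL(fileURLWithPath: "/path/to/HandPostureClassifier.mlmodel"))
```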
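
Steps 2 and 3 meet in an AVCaptureVideoDataOutput delegate: the camera hands us sample buffers, and Vision runs our Core ML model on each one. Below is a minimal sketch; HandPostureClassifier is the hypothetical class Xcode auto-generates once the .mlmodel file from step 1 is added to the project.

```swift
import AVFoundation
import CoreML
import Vision

// A minimal sketch of steps 2 and 3. "HandPostureClassifier" is the
// hypothetical class Xcode generates from the .mlmodel trained in step 1.
final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Wrap the Core ML model for use with Vision.
    private lazy var request: VNCoreMLRequest = {
        // Force-try keeps the sketch short; handle errors properly in real code.
        let coreMLModel = try! HandPostureClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try! VNCoreMLModel(for: coreMLModel)
        return VNCoreMLRequest(model: visionModel) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            print("Posture: \(top.identifier) (confidence \(top.confidence))")
        }
    }()

    // AVCaptureSession delivers each camera frame here (step 2)...
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // ...and Vision classifies it with our trained model (step 3).
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

Wiring this up is all step 2 needs: create an AVCaptureSession, add a camera input and an AVCaptureVideoDataOutput, and set an instance of this class as the output's sample buffer delegate. We'll flesh that out as we go.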