I am adding vision to my Sidekick bots, but first I'd like to evaluate the technology.
The Goal
The goal of this article is to reduce the building blocks for using Apple's Core ML and Vision frameworks to the bare minimum - get them working and easy to understand. From there, we can expand our understanding to tackle more complex work.
The Plan
- Use Create ML, Xcode's developer tool, to train an ML model to classify hand-posture images (see the training sketch after this list)
- Capture images from the device's camera (see the camera sketch below)
- Use our trained ML model, through the Vision framework, to classify the images captured from the camera (see the classification sketch below)
- Re-train our ML model to make it more accurate
- Bob's your uncle and the world is our oyster. Let's do something cool with this!
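Training normally happens inside the Create ML app itself (Xcode > Open Developer Tool > Create ML), where you drag in labeled folders of images and press Train. The same job can also be scripted with the CreateML framework in a macOS playground. Here is a minimal sketch, assuming the training images sit in one sub-folder per label; the folder and file paths are placeholders:

```swift
// A minimal sketch, assuming a macOS playground and training images laid out as
// HandPostures/fist/, HandPostures/open/, etc. (one sub-folder per label).
import CreateML
import Foundation

let trainingDir = URL(fileURLWithPath: "/path/to/HandPostures")   // placeholder path

// Train an image classifier; Create ML holds out its own validation split by default.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Check how the model did before exporting it.
print(classifier.trainingMetrics)
print(classifier.validationMetrics)

// Export a .mlmodel file that can be dragged into the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/HandPostureClassifier.mlmodel"))
```

Create ML also offers a dedicated Hand Pose Classification template that may fit this task even better; the plain image classifier above is just the simplest starting point.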
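For the camera feed, AVFoundation's AVCaptureSession with a video-data output is the standard route. Below is a minimal sketch, assuming the app's Info.plist declares NSCameraUsageDescription; the CameraFeed class and its onFrame hook are my own illustration, not an Apple API:

```swift
// A minimal sketch of grabbing live frames with AVFoundation.
import AVFoundation

final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var onFrame: ((CVPixelBuffer) -> Void)?   // hand each frame to the classifier

    func start() {
        session.sessionPreset = .vga640x480   // small frames are plenty for classification

        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)

        // In real code, call this off the main thread.
        session.startRunning()
    }

    // Called once per captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        onFrame?(pixelBuffer)
    }
}
```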
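Classifying a frame is then a matter of wrapping the exported model in Vision's VNCoreMLModel and performing a VNCoreMLRequest. In this sketch, HandPostureClassifier stands in for whatever class name Xcode generates from your .mlmodel file:

```swift
// A minimal sketch of classifying a camera frame with the Vision framework.
import CoreML
import Vision

// Build the Vision-wrapped model once and reuse it for every frame.
// try! is fine for a sketch; handle errors properly in real code.
let visionModel: VNCoreMLModel = {
    let coreMLModel = try! HandPostureClassifier(configuration: MLModelConfiguration()).model
    return try! VNCoreMLModel(for: coreMLModel)
}()

func classify(_ pixelBuffer: CVPixelBuffer) {
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Image classifiers return VNClassificationObservation, best match first.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Saw '\(best.identifier)' with confidence \(best.confidence)")
    }
    // Let Vision scale and crop the frame to the model's expected input size.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

Wiring the two sketches together is then a single line: feed.onFrame = classify.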