Create an Object Detection (Core ML) Model for iOS with Apple’s Turi Create on Windows/Google Colab

After researching various sources on how to create a detection model for iOS, I’ve summarized my work in this blog so that it will be easier for anyone to create a machine learning model for iOS, even without a Mac.

Turi Create simplifies the development of custom machine learning models. You don’t have to be a machine learning expert to add recommendations, object detection, image classification, image similarity, or activity classification to your app; it is easy to use and lets you focus on tasks instead of algorithms.

The purpose of this blog is to train and create a model compatible with an iOS application even if you don’t have a Mac. You can train a model in Google Colab from any OS. The main advantage of Colab is its GPU support, which makes training much faster.


To make it easier, please find my GitHub repo and use the Colab notebook; note that there have been some updates to the runtime. As of this writing, the latest version of Python that works with Turi Create 6.4.1 is 3.7. Download and install it, then follow the instructions in the GitHub repo for setting up the virtual environment and installing Turi Create.

Image Setup

This setup uses the Simple Image Annotator from this blog, which generates a CSV file of all the image annotations. It is recommended to keep all the images in one folder and output a single CSV; otherwise you will have to combine multiple CSV files into one.
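As a sketch of what the conversion can look like, assuming the annotator’s CSV has columns `image`, `label`, `x_min`, `y_min`, `x_max`, `y_max` (these column names are an assumption; adjust them to whatever your annotator actually emits), the corner coordinates can be turned into the center-based annotation dictionaries that Turi Create’s object detector expects:

```python
import csv
from collections import defaultdict

def csv_to_annotations(csv_path):
    """Group per-box CSV rows into Turi Create-style annotation lists,
    keyed by image filename. The CSV column names here are assumptions;
    change them to match your annotator's output."""
    annotations = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            x_min, y_min = float(row["x_min"]), float(row["y_min"])
            x_max, y_max = float(row["x_max"]), float(row["y_max"])
            annotations[row["image"]].append({
                "label": row["label"],
                "coordinates": {  # Turi Create wants box centers, not corners
                    "x": (x_min + x_max) / 2,
                    "y": (y_min + y_max) / 2,
                    "width": x_max - x_min,
                    "height": y_max - y_min,
                },
            })
    return dict(annotations)
```

The resulting per-image lists can then be attached to an SFrame of images as the annotations column used for training.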

I simplified this into one step, since the annotations column requires both the bounding-box coordinates and the training labels. Combining this into one step made more sense and removed the need to hardcode a training label name. The script now uses the subdirectory name as the label, and it can prepare any number of objects for detection.
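A minimal sketch of the subdirectory-as-label idea (the function name and image-extension filter are my own assumptions, not the repo’s exact code):

```python
import os

def labels_from_subdirs(root):
    """Map each image path under `root` to a training label taken from
    the name of the subdirectory it lives in,
    e.g. images/dog/001.jpg -> 'dog'."""
    labels = {}
    for dirpath, _, filenames in os.walk(root):
        label = os.path.basename(dirpath)
        for name in filenames:
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                labels[os.path.join(dirpath, name)] = label
    return labels
```

Adding a new object class then only requires adding a new subdirectory of images; no script changes are needed.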

Training the Model

I have changed the script to support the Colab environment with GPU support. Previously it took nearly 3 hours to train 1000 iterations; in my test, with even more imagery, it took around 17 minutes, nearly 10 times faster. So I bumped the number of iterations up to 2500, which takes about 42 minutes to train on my Radeon Pro 560 with 4 GB.

All that is needed is to set the variable to whatever you want to use in Xcode.

At the end of the training script, you should have a saved model and an exported Core ML model file to drag and drop into your Xcode project.