Make it Intelligent
Designing apps that learn and adapt
Apple's technological advancements in chip design have made our daily devices incredibly powerful. As developers, we can leverage this hardware to solve previously unimaginable problems.
Artificial Intelligence (AI) provides us with powerful tools to develop applications that significantly impact people’s lives. Within AI, Machine Learning enables our applications to learn from data and make informed decisions or predictions.
Apple provides two intuitive tools, Create ML and Core ML, that support developers in creating and using powerful, lightweight machine learning models in their applications.
Developers have access to a set of pre-trained machine learning models, provided in Apple's documentation, to add advanced functionality to their applications. When a specialized model is required for a specific task, they can create their own with Create ML.
Create ML makes it simple to create machine learning models for a range of different use cases. Because it is integrated into Xcode, the bulk of the work lies in collecting a strong training dataset. The resulting models can then be imported into an application and used through Core ML.
Core ML powers many other frameworks and technologies, like the Vision framework and the Natural Language framework.
There are many applications taking advantage of these capabilities to deliver features that make a difference in the lives of people all around the world: supporting visually impaired people in recognizing objects and avoiding obstacles, helping athletes practice and improve in their sport, and enabling artists to express themselves.
Machine Learning
Creating Models with Create ML
The quality of machine learning models is essential, as it determines the effectiveness and reliability of the applications they power. Great models accurately capture the patterns and relationships in the data, leading to reliable predictions, classifications, or decisions.
Great machine learning models are effective, reliable, and accurate.
Besides being a powerful app for training custom machine learning models, Create ML is also available as frameworks (the Create ML framework and Create ML Components) that support automating model creation and building on-device personalization.
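For instance, training an image classifier with the Create ML framework takes only a few lines of Swift on macOS. The following is a minimal sketch; the directory layout (one labeled subdirectory per class), the file paths, and the FlowerClassifier name are placeholders:

```swift
import CreateML
import Foundation

// Train an image classifier from a folder containing one subdirectory
// per label (e.g. "rose/", "tulip/"); the paths are placeholders.
let trainingDir = URL(fileURLWithPath: "/path/to/training-data")
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Inspect how well the model fits the training data...
let accuracy = (1.0 - classifier.trainingMetrics.classificationError) * 100
print("Training accuracy: \(accuracy)%")

// ...then export it as a .mlmodel file for use with Core ML.
try classifier.write(to: URL(fileURLWithPath: "/path/to/FlowerClassifier.mlmodel"))
```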
Using Machine Learning with Core ML
Core ML is the framework responsible for integrating machine learning models into an app, enabling predictions and decisions directly on the device.
By using Core ML, developers can run the models they have created entirely on the device, meaning that all data processing happens locally. This speeds up responses, preserves user privacy, and leaves plenty of room for customization.
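As a minimal sketch, assume the hypothetical FlowerClassifier.mlmodel from the previous section has been added to an Xcode project; Xcode then generates a Swift class whose prediction API mirrors the model's input and output definitions, so a prediction looks roughly like this:

```swift
import CoreML

// FlowerClassifier is the class Xcode generates from the (hypothetical)
// FlowerClassifier.mlmodel file added to the project.
let model = try FlowerClassifier(configuration: MLModelConfiguration())

// `pixelBuffer` is a CVPixelBuffer holding the image to classify,
// obtained elsewhere in the app (camera frame, photo library, ...).
let prediction = try model.prediction(image: pixelBuffer)
print(prediction.classLabel) // e.g. "rose"
```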
Using Core ML Tools, it is also possible to convert models trained with a variety of other tools, like TensorFlow and PyTorch, to the format supported by Core ML.
Explore the Vision Framework
Vision is the Apple framework for processing and analyzing images and videos, using either Apple's built-in models or custom models created with Create ML.
The framework offers more than 25 different requests for tasks like detection, tracking, recognition, and evaluation; a text recognition example is sketched after the list. Among those you can find:
- Detecting faces and face landmarks;
- Detecting the contours of edges in an image;
- Tracking human and animal body poses;
- Tracking the trajectory of an object;
- Performing hand tracking;
- Evaluating the aesthetic quality of images;
- Recognizing text.
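As an example of how these requests are used, here is a minimal sketch of text recognition; the CGImage input is assumed to come from elsewhere in the app:

```swift
import CoreGraphics
import Vision

// Recognize printed text in an image; `image` is assumed to come from
// elsewhere in the app (a photo the user selected, a camera frame, ...).
func recognizeText(in image: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            // Each observation carries ranked candidate strings.
            if let best = observation.topCandidates(1).first {
                print(best.string)
            }
        }
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```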
Explore the Natural Language Framework
The Natural Language framework allows you to segment text into units (paragraphs, sentences, or words) and tag each of them with linguistic information, including part of speech, lexical class, lemma, script, and language.
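A minimal sketch of word-level tagging with NLTagger (the sample sentence is arbitrary):

```swift
import NaturalLanguage

let text = "Apple provides intuitive machine learning tools for developers."

let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

// Enumerate the words and print each one's lexical class (noun, verb, ...).
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if let tag = tag {
        print("\(text[range]): \(tag.rawValue)")
    }
    return true
}
```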
Explore the Speech Framework
The Speech framework supports recognizing spoken words, such as verbal commands or dictation, from live audio captured by the device's microphone or from prerecorded audio, and converting them into transcribed text.
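A minimal sketch of transcribing a prerecorded file; it assumes the app declares the speech recognition usage description in its Info.plist, the user grants permission, and `url` points to an audio file available to the app:

```swift
import Speech

// Transcribe a prerecorded audio file once speech recognition is authorized.
func transcribe(fileAt url: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(),
              recognizer.isAvailable else { return }

        let request = SFSpeechURLRecognitionRequest(url: url)
        _ = recognizer.recognitionTask(with: request) { result, _ in
            // Partial results arrive while recognition is in progress;
            // print the transcription once it is final.
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```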
Explore the Sound Analysis Framework
The Sound Analysis framework is designed to analyze and identify specific sounds within audio content. This powerful tool enables applications to recognize and differentiate between various sounds, significantly enhancing the user experience.
For example, it powers the sound recognition accessibility features in iOS, allowing the device to detect and notify users of important sounds like doorbells, alarms, and crying babies. The framework can identify over 300 types of sounds and supports custom Core ML models, providing tailored sound recognition capabilities.
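A minimal sketch of classifying the sounds in an audio file with the built-in classifier; the file URL is assumed to come from elsewhere in the app:

```swift
import Foundation
import SoundAnalysis

// Receives classification results as the analyzer works through the file.
final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier) (confidence: \(top.confidence))")
    }
}

// Classify an audio file using Apple's built-in sound classifier;
// a custom Core ML model could be supplied instead.
func classifySounds(in url: URL) throws {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = SoundObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // blocks until the whole file has been processed
}
```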