Make it Intelligent

Designing apps that learn and adapt

Apple's technological advancements in chip design have made our daily devices incredibly powerful. As developers, we can leverage this hardware to solve previously unimaginable problems.

Artificial Intelligence (AI) provides us with powerful tools to develop applications that significantly impact people’s lives. Within AI, Machine Learning enables our applications to learn from data and make informed decisions or predictions.

Apple provides developers with two intuitive tools, Create ML and Core ML, for creating and using powerful, lightweight machine learning models in their applications.

Machine Learning - Apple Developer
Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Developers have access to a set of pre-trained machine learning models, provided in Apple's documentation, for adding advanced functionality to their applications. When a specialized model is required for a specific task, they can build one with Create ML.

Create ML makes it simple to build machine learning models for a range of use cases. It is integrated into Xcode, so the bulk of the work lies in collecting a strong training dataset. The resulting models can then be imported into an application and used through Core ML.

Core ML powers many other frameworks and technologies, like the Vision framework and the Natural Language framework.

There are many applications taking advantage of these capabilities to deliver features that make a difference in the lives of people all around the world. They support visually impaired people in recognizing objects and avoiding obstacles, help athletes practice and improve in their sport, and enable artists to express themselves.

Machine Learning

Creating Models with Create ML

The quality of models in machine learning is essential as they impact the effectiveness and reliability of the applications they power. Great models accurately capture the patterns and relationships in the data, leading to reliable predictions, classifications, or decisions.

Great machine-learning models are effective, reliable, and accurate.

Create ML Explained: Apple’s Toolchain to Build and Train Machine Learning Models
This article will help you understand the main features of Create ML and how you can create your own custom machine learning models.

Besides being a powerful tool for training custom machine learning models, Create ML is also available as frameworks (the Create ML framework and Create ML Components) that support automating model creation and building on-device personalization.
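As an illustration, training an image classifier with the Create ML framework on macOS can be sketched as follows. The dataset paths are placeholders; the code assumes training images organized into one subdirectory per label.

```swift
import CreateML
import Foundation

// Hypothetical dataset location: one subdirectory per label,
// e.g. Training/cat, Training/dog. Adjust to your own data.
let trainingURL = URL(fileURLWithPath: "/path/to/Training")

do {
    // Train an image classifier from the labeled directories.
    let classifier = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingURL)
    )

    // Inspect the training accuracy before exporting.
    let accuracy = (1.0 - classifier.trainingMetrics.classificationError) * 100
    print("Training accuracy: \(accuracy)%")

    // Save the model as a .mlmodel file ready for Core ML.
    try classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
} catch {
    print("Training failed: \(error)")
}
```

The same training flow is available interactively, without code, through the Create ML app bundled with Xcode.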

Using Machine Learning with Core ML

Core ML is the framework responsible for the integration of machine learning models in an app, making predictions and decisions directly on the device.

By using Core ML, developers can integrate the models they have created, meaning that all data processing happens locally. This speeds up responses, preserves privacy, and leaves plenty of room for customization.
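When a .mlmodel file is added to an Xcode project, Xcode generates a Swift class for it. A minimal prediction sketch might look like this; `SleepCalculator` and its feature names are hypothetical stand-ins for whatever your model exposes.

```swift
import CoreML

// "SleepCalculator" is a placeholder for the class Xcode generates
// from your .mlmodel file; the input and output feature names
// below are illustrative, not a real Apple-provided model.
do {
    let config = MLModelConfiguration()
    let model = try SleepCalculator(configuration: config)

    // Inputs mirror the features the model was trained on.
    let prediction = try model.prediction(
        wake: 8 * 60 * 60,    // wake-up time, seconds after midnight
        estimatedSleep: 8.0,  // desired hours of sleep
        coffee: 2             // cups of coffee per day
    )

    print("Predicted sleep needed: \(prediction.actualSleep) seconds")
} catch {
    print("Prediction failed: \(error)")
}
```

Because the prediction runs entirely on device, no user data ever leaves the phone.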

Core ML Explained: Apple’s Machine Learning Framework
This article will help you understand the main features of Core ML and how you can leverage machine learning in your apps.

It is also possible to convert models trained with a variety of tools, like TensorFlow and PyTorch, to the format supported by Core ML using Core ML Tools.

Explore the Vision Framework

Vision is Apple's framework for processing and analyzing images and videos, using either the default Apple models or custom models created with Create ML.

The framework offers more than 25 different requests for tasks like detection, tracking, recognition, and evaluation.

The following articles cover features offered by the Vision framework:

Detecting the contour of the edges of an image with the Vision framework
Learn how to detect and draw the edges of an image using the Vision framework in a SwiftUI app.
Classifying image content with the Vision framework
Learn how to use the Vision framework to classify images in a SwiftUI application.
Removing image background using the Vision framework
Learn how to use the Vision framework to easily remove the background of images in a SwiftUI application.
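As a sketch of how such a request is issued, the built-in image-classification request can be used like this; `cgImage` is assumed to hold an image you have already loaded.

```swift
import Vision
import CoreGraphics

// A minimal sketch: classify the contents of a CGImage using
// Vision's built-in classification model.
func classify(_ cgImage: CGImage) {
    let request = VNClassifyImageRequest { request, error in
        guard let observations = request.results as? [VNClassificationObservation] else {
            return
        }
        // Print the top three labels with their confidence scores.
        for observation in observations.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage)
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```

Every Vision request follows this same pattern: create a request, hand it to an image request handler, and read the typed observations from the results.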

Explore the Natural Language Framework

The Natural Language framework segments text into units - paragraphs, sentences, words - and tags each of them with linguistic information, including part of speech, lexical class, lemma, script, and language.
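A minimal sketch of this tokenize-and-tag flow, using `NLTagger` to label each word in a sentence with its lexical class:

```swift
import NaturalLanguage

// Tag every word in a sentence with its lexical class
// (noun, verb, adjective, and so on).
let text = "Machine learning makes apps smarter."
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if let tag = tag {
        print("\(text[range]): \(tag.rawValue)")
    }
    return true // keep enumerating
}
```

Swapping the scheme for `.lemma`, `.language`, or `.nameType` lets the same enumeration extract different linguistic information.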

The following articles cover features offered by the Natural Language framework:

Identifying the Language in a Text using the Natural Language Framework
By the end of this article, you will be able to identify the dominant language in a piece of text using the Natural Language framework.
Applying sentiment analysis using the Natural Language framework
Use the Natural Language framework from Apple to apply sentiment analysis to text.
Lexical classification with the Natural Language framework
Learn how to identify nouns, adjectives, and more with the Natural Language framework on a SwiftUI app.

Explore the Speech Framework

The Speech framework supports the recognition of spoken words, such as verbal commands or dictation, from live audio captured by the device's microphone or from prerecorded audio, converting them into transcribed text.
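Transcribing a prerecorded file can be sketched as follows. The file URL is a placeholder, and the app must request speech-recognition authorization (and declare the usage in its Info.plist) before running this.

```swift
import Speech

// A minimal sketch: transcribe an audio file with SFSpeechRecognizer.
// Assumes authorization has already been granted.
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        print("Speech recognizer unavailable")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: url)
    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            // The best transcription once recognition completes.
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Transcription failed: \(error)")
        }
    }
}
```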

Transcribing audio from a file using the Speech framework
Learn how to transcribe text from an audio file using the Speech framework in a SwiftUI application.

Explore the Sound Analysis Framework

The Sound Analysis framework is designed to analyze and identify specific sounds within audio content. This powerful tool enables applications to recognize and differentiate between various sounds, significantly enhancing the user experience.

For example, it powers the sound recognition accessibility features in iOS, allowing the device to detect and notify users of important sounds like doorbells, alarms, and crying babies. The framework can identify over 300 types of sounds and supports custom Core ML models, providing tailored sound recognition capabilities.
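Classifying the sounds in an audio file with the built-in classifier can be sketched like this; results are delivered through an observer conforming to `SNResultsObserving`.

```swift
import SoundAnalysis

// Receives classification results as the analyzer processes audio.
class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let best = result.classifications.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }
}

func analyze(fileAt url: URL) throws {
    // .version1 is Apple's built-in classifier; a custom Core ML
    // model can be supplied instead via SNClassifySoundRequest(mlModel:).
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = SoundObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze()
}
```

The same request type works with a live audio stream by feeding microphone buffers to an `SNAudioStreamAnalyzer` instead.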

Identify individual sounds in a live audio buffer
Learn how to create a SwiftUI app that uses the Sound Analysis framework to identify sounds within a live audio buffer.