Copyright © 2010-2016 Linux Foundation
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
Revision History
Revision 2.1 | April 2016 | Released with the Yocto Project 2.1 Release.
Welcome to the Yocto Project Software Development Kit (SDK) Developer's Guide. This manual provides information that explains how to use both the standard Yocto Project SDK and an extensible SDK to develop applications and images using the Yocto Project. Additionally, the manual also provides information on how to use the popular Eclipse™ IDE as part of your application development workflow within the SDK environment.
Prior to the 2.0 Release of the Yocto Project, application development was primarily accomplished through the use of the Application Development Toolkit (ADT) and the availability of stand-alone cross-development toolchains and other tools. With the 2.1 Release of the Yocto Project, application development has transitioned to a more traditional standard SDK and an extensible SDK.
A standard SDK consists of the following:
Cross-Development Toolchain: This toolchain contains a compiler, debugger, and various miscellaneous tools.
Libraries, Headers, and Symbols: The libraries, headers, and symbols are specific to the image (i.e. they match the image).
Environment Setup Script: This *.sh file, once run, sets up the cross-development environment by defining variables and preparing for SDK use.
You can use the standard SDK to independently develop and test code that is destined to run on some target machine.
An extensible SDK consists of everything that the standard SDK has plus tools that allow you to easily add new applications and libraries to an image, modify the source of an existing component, test changes on the target hardware, and easily integrate an application into the OpenEmbedded build system.
SDKs are completely self-contained. The binaries are linked against their own copy of libc, which results in no dependencies on the target system. To achieve this, the pointer to the dynamic loader is configured at install time since that path cannot be dynamically altered. This is the reason for a wrapper around the populate_sdk and populate_sdk_ext archives.
Another feature of the SDKs is that only one set of cross-compiler toolchain binaries is produced per architecture. This feature takes advantage of the fact that the target hardware can be passed to gcc as a set of compiler options. Those options are set up by the environment script and contained in variables such as CC and LD. This reduces the space needed for the tools. Understand, however, that a sysroot is still needed for every target since those binaries are target-specific.
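For example, after sourcing the environment setup script for an i586-tuned SDK installed in the default location, the compiler variable might look similar to the following (the exact tuning flags and sysroot path depend on your SDK):

$ echo $CC
i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.1/sysroots/i586-poky-linux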
The SDK development environment consists of the following:
The self-contained SDK, which is an architecture-specific cross-toolchain and matching sysroots (target and native) all built by the OpenEmbedded build system (e.g. the SDK). The toolchain and sysroots are based on a Metadata configuration and extensions, which allows you to cross-develop on the host machine for the target hardware.
The Quick EMUlator (QEMU), which lets you simulate target hardware. QEMU is not literally part of the SDK. You must build and include this emulator separately. However, QEMU plays an important role in the development process that revolves around use of the SDK.
The Eclipse IDE Yocto Plug-in. This plug-in is available for you if you are an Eclipse user. In the same manner as QEMU, the plug-in is not literally part of the SDK but is rather available for use as part of the development process.
Various user-space tools that greatly enhance your application development experience. These tools are also separate from the actual SDK but can be independently obtained and used in the development process.
The Cross-Development Toolchain consists of a cross-compiler, cross-linker, and cross-debugger that are used to develop user-space applications for targeted hardware. This toolchain is created by running a toolchain installer script or through a Build Directory that is based on your Metadata configuration or extension for your targeted device. The cross-toolchain works with a matching target sysroot.
The native and target sysroots contain needed headers and libraries for generating binaries that run on the target architecture. The target sysroot is based on the target root filesystem image that is built by the OpenEmbedded build system and uses the same Metadata configuration used to build the cross-toolchain.
The QEMU emulator allows you to simulate your hardware while running your application or image. QEMU is not part of the SDK but is made available a number of ways:
If you have cloned the poky Git repository to create a Source Directory and you have sourced the environment setup script, QEMU is installed and automatically available.
If you have downloaded a Yocto Project release and unpacked it to create a Source Directory and you have sourced the environment setup script, QEMU is installed and automatically available.
If you have installed the cross-toolchain tarball and you have sourced the toolchain's setup environment script, QEMU is also installed and automatically available.
The Eclipse IDE is a popular development environment and it fully supports development using the Yocto Project. When you install and configure the Eclipse Yocto Project Plug-in into the Eclipse IDE, you maximize your Yocto Project experience. Installing and configuring the Plug-in results in an environment that has extensions specifically designed to let you more easily develop software. These extensions allow for cross-compilation, deployment, and execution of your output into a QEMU emulation session. You can also perform cross-debugging and profiling. The environment also supports a suite of tools that allows you to perform remote profiling, tracing, collection of power data, collection of latency data, and collection of performance data.
For information about the application development workflow that uses the Eclipse IDE and for a detailed example of how to install and configure the Eclipse Yocto Project Plug-in, see the "Developing Applications Using Eclipse™" section.
User-space tools, which are available as part of the SDK development environment, can be helpful. The tools include LatencyTOP, PowerTOP, Perf, SystemTap, and Lttng-ust. These tools are common development tools for the Linux platform.
LatencyTOP: LatencyTOP focuses on latency that causes skips in audio, stutters in your desktop experience, or situations that overload your server even when you have plenty of CPU power left.
PowerTOP: Helps you determine what software is using the most power. You can find out more about PowerTOP at https://01.org/powertop/.
Perf: Performance counters for Linux used to keep track of certain types of hardware and software events. For more information on these types of counters see https://perf.wiki.kernel.org/. For examples on how to setup and use this tool, see the "perf" section in the Yocto Project Profiling and Tracing Manual.
SystemTap: A free software infrastructure that simplifies information gathering about a running Linux system. This information helps you diagnose performance or functional problems. SystemTap is not available as a user-space tool through the Eclipse IDE Yocto Plug-in. See http://sourceware.org/systemtap for more information on SystemTap. For examples on how to setup and use this tool, see the "SystemTap" section in the Yocto Project Profiling and Tracing Manual.
Lttng-ust: A User-space Tracer designed to provide detailed information on user-space activity. See http://lttng.org/ust for more information on Lttng-ust.
Fundamentally, the SDK fits into the development process as follows:
The SDK is installed on any machine and can be used to develop applications, images, and kernels. An SDK can even be used by a QA Engineer or Release Engineer. The fundamental concept is that the machine that has the SDK installed does not have to be associated with the machine that has the Yocto Project installed. A developer can independently compile and test an object on their machine and then, when the object is ready for integration into an image, they can simply make it available to the machine that has the Yocto Project. Once the object is available, the image can be rebuilt using the Yocto Project to produce the modified image.
You just need to follow these general steps:
Install the SDK for your target hardware: For information on how to install the SDK, see the "Installing the SDK" section.
Download the Target Image: The Yocto Project supports several target architectures and has many pre-built kernel images and root filesystem images.
If you are going to develop your application on hardware, go to the machines download area and choose a target machine area from which to download the kernel image and root filesystem. This download area could have several files in it that support development using actual hardware. For example, the area might contain .hddimg files that combine the kernel image with the filesystem, boot loaders, and so forth. Be sure to get the files you need for your particular development process.
If you are going to develop your application and then run and test it using the QEMU emulator, go to the machines/qemu download area. From this area, go down into the directory for your target architecture (e.g. qemux86_64 for an Intel®-based 64-bit architecture). Download the kernel, root filesystem, and any other files you need for your process.
Develop and Test your Application: At this point, you have the tools to develop your application. If you need to separately install and use the QEMU emulator, you can go to QEMU Home Page to download and learn about the emulator. See the "Using the Quick EMUlator (QEMU)" chapter in the Yocto Project Development Manual for information on using QEMU within the Yocto Project.
The remainder of this manual describes how to use both the standard SDK and the extensible SDK. Information also exists in appendix form that describes how you can build, install, and modify an SDK.
This chapter describes the standard SDK and how to use it. Information covers the pieces of the SDK, how to install it, and presents several task-based procedures common for developing with a standard SDK.
The Standard SDK provides a cross-development toolchain and libraries tailored to the contents of a specific image. You would use the Standard SDK if you want a more traditional toolchain experience.
The installed Standard SDK consists of several files and directories. Basically, it contains an SDK environment setup script, some configuration files, and host and target root filesystems to support usage. You can see the directory structure in the "Installed Standard SDK Directory Structure" section.
The first thing you need to do is install the SDK on your host development machine by running the *.sh installation script. You can download a tarball installer, which includes the pre-built toolchain, the runqemu script, and support files, from the appropriate directory under http://downloads.yoctoproject.org/releases/yocto/yocto-2.1/toolchain/. Toolchains are available for 32-bit and 64-bit x86 development systems from the i686 and x86_64 directories, respectively. The toolchains the Yocto Project provides are based off the core-image-sato image and contain libraries appropriate for developing against that image. Each type of development system supports five or more target architectures.
The names of the tarball installer scripts are such that a string representing the host system appears first in the filename and then is immediately followed by a string representing the target architecture.
poky-glibc-host_system-image_type-arch-toolchain-release_version.sh

Where:

host_system is a string representing your development system: i686 or x86_64.

image_type is the image for which the SDK was built.

arch is a string representing the tuned target architecture: i586, x86_64, powerpc, mips, armv7a, or armv5te.

release_version is a string representing the release number of the Yocto Project: 2.1, 2.1+snapshot
For example, the following toolchain installer is for a 64-bit development host system and an i586-tuned target architecture based off the SDK for core-image-sato and using the 2.1 release:
poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
The SDK and toolchains are self-contained and by default are installed into /opt/poky. However, when you run the SDK installer, you can choose an installation directory.

You must change the permissions on the toolchain installer script so that it is executable:

$ chmod +x poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
The following command shows how to run the installer given a toolchain tarball for a 64-bit x86 development host system and a 32-bit x86 target architecture. The example assumes the toolchain installer is located in ~/Downloads/.
$ ./poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
Poky (Yocto Project Reference Distro) SDK installer version 2.1
===============================================================
Enter target directory for SDK (default: /opt/poky/2.1):
You are about to install the SDK to "/opt/poky/2.1". Proceed[Y/n]? Y
Extracting SDK.......................................................................done
Setting it up...done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
 $ . /opt/poky/2.1/environment-setup-i586-poky-linux
Again, reference the "Installed Standard SDK Directory Structure" section for more details on the resulting directory structure of the installed SDK.
Once you have the SDK installed, you must run the SDK environment setup script before you can actually use it. This setup script resides in the directory you chose when you installed the SDK. For information on where this setup script can reside, see the "Obtaining the SDK" Appendix.
Before running the script, be sure it is the one that matches the architecture for which you are developing. Environment setup scripts begin with the string "environment-setup" and include as part of their name the tuned target architecture. For example, the command to source a setup script for an IA-based target machine using i586 tuning and located in the default SDK installation directory is as follows:
$ source /opt/poky/2.1/environment-setup-i586-poky-linux
When you run the setup script, many environment variables are defined:
SDKTARGETSYSROOT - The path to the sysroot used for cross-compilation
PKG_CONFIG_PATH - The path to the target pkg-config files
CONFIG_SITE - A GNU autoconf site file preconfigured for the target
CC - The minimal command and arguments to run the C compiler
CXX - The minimal command and arguments to run the C++ compiler
CPP - The minimal command and arguments to run the C preprocessor
AS - The minimal command and arguments to run the assembler
LD - The minimal command and arguments to run the linker
GDB - The minimal command and arguments to run the GNU Debugger
STRIP - The minimal command and arguments to run 'strip', which strips symbols
RANLIB - The minimal command and arguments to run 'ranlib'
OBJCOPY - The minimal command and arguments to run 'objcopy'
OBJDUMP - The minimal command and arguments to run 'objdump'
AR - The minimal command and arguments to run 'ar'
NM - The minimal command and arguments to run 'nm'
TARGET_PREFIX - The toolchain binary prefix for the target tools
CROSS_COMPILE - The toolchain binary prefix for the target tools
CONFIGURE_FLAGS - The minimal arguments for GNU configure
CFLAGS - Suggested C flags
CXXFLAGS - Suggested C++ flags
LDFLAGS - Suggested linker flags when you use CC to link
CPPFLAGS - Suggested preprocessor flags
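Because these variables already carry the cross-compiler name, tuning flags, and sysroot, you can cross-compile a single source file directly with them. A minimal sketch, assuming the setup script has been sourced and using a hypothetical test.c:

$ $CC $CFLAGS $LDFLAGS -o test test.c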
Once you have a suitable cross-toolchain installed, it is very easy to develop a project outside of the OpenEmbedded build system. This section presents a simple "Helloworld" example that shows how to set up, compile, and run the project.
Follow these steps to create a simple Autotools-based project:
Create your directory: Create a clean directory for your project and then make that directory your working location:
$ mkdir $HOME/helloworld
$ cd $HOME/helloworld
Populate the directory: Create hello.c, Makefile.am, and configure.in files as follows:

For hello.c, include these lines:
#include <stdio.h>

int main(void)
{
   printf("Hello World!\n");
   return 0;
}
For Makefile.am, include these lines:
bin_PROGRAMS = hello
hello_SOURCES = hello.c
For configure.in, include these lines:
AC_INIT(hello.c)
AM_INIT_AUTOMAKE(hello,0.1)
AC_PROG_CC
AC_PROG_INSTALL
AC_OUTPUT(Makefile)
Source the cross-toolchain environment setup file: Installation of the cross-toolchain creates a cross-toolchain environment setup script in the directory in which the SDK was installed. Before you can use the tools to develop your project, you must source this setup script. The script begins with the string "environment-setup" and contains the machine architecture, which is followed by the string "poky-linux". Here is an example that sources a script from the default SDK installation directory that uses the 32-bit Intel x86 Architecture and the Krogoth Yocto Project release:
$ source /opt/poky/2.1/environment-setup-i586-poky-linux
Generate the local aclocal.m4 files and create the configure script: The following GNU Autotools generate the local aclocal.m4 files and create the configure script:

$ aclocal
$ autoconf
Generate files needed by GNU coding standards: GNU coding standards require certain files in order for the project to be compliant. This command creates those files:
$ touch NEWS README AUTHORS ChangeLog
Generate the configure file: This command generates the configure file:
$ automake -a
Cross-compile the project: This command compiles the project using the cross-compiler. The CONFIGURE_FLAGS environment variable provides the minimal arguments for GNU configure:

$ ./configure ${CONFIGURE_FLAGS}
Make and install the project: These two commands generate and install the project into the destination directory:
$ make
$ make install DESTDIR=./tmp
Verify the installation: This command is a simple way to verify the installation of your project. Running the command prints the architecture on which the binary file can run. This architecture should be the same architecture that the installed cross-toolchain supports.
$ file ./tmp/usr/local/bin/hello
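For an i586-tuned binary, for instance, the output identifies a 32-bit x86 ELF executable, along the lines of the following (exact details depend on your toolchain and image):

./tmp/usr/local/bin/hello: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, not stripped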
Execute your project: To execute the project in the shell, simply enter the name. You could also copy the binary to the actual target hardware and run the project there as well:
$ ./hello
As expected, the project displays the "Hello World!" message.
For an Autotools-based project, you can use the cross-toolchain by just passing the appropriate host option to configure. The host option you use is derived from the name of the environment setup script found in the directory in which you installed the cross-toolchain. For example, the host option for an ARM-based target that uses the GNU EABI is armv5te-poky-linux-gnueabi. You will notice that the name of the script is environment-setup-armv5te-poky-linux-gnueabi.
Thus, the following command works to update your project and rebuild it using the appropriate cross-toolchain tools:

$ ./configure --host=armv5te-poky-linux-gnueabi \
    --with-libtool-sysroot=sysroot_dir
If the configure script results in problems recognizing the --with-libtool-sysroot=sysroot-dir option, regenerate the script to enable the support by doing the following and then run the script again:

$ libtoolize --automake
$ aclocal -I ${OECORE_NATIVE_SYSROOT}/usr/share/aclocal \
    [-I dir_containing_your_project-specific_m4_macros]
$ autoconf
$ autoheader
$ automake -a
For Makefile-based projects, the cross-toolchain environment variables established by running the cross-toolchain environment setup script are subject to general make rules.
CC=i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.1/sysroots/i586-poky-linux
LD=i586-poky-linux-ld --sysroot=/opt/poky/2.1/sysroots/i586-poky-linux
CFLAGS=-O2 -pipe -g -feliminate-unused-debug-types
CXXFLAGS=-O2 -pipe -g -feliminate-unused-debug-types
Now, consider the following three cases:
Case 1 - No Variables Set in the Makefile: Because these variables are not specifically set in the Makefile, the variables retain their values based on the environment.
Case 2 - Variables Set in the Makefile: Specifically setting variables in the Makefile during the build results in the environment settings of the variables being overwritten.
Case 3 - Variables Set when the Makefile is Executed from the Command Line: Executing the Makefile from the command line results in the variables being overwritten with command-line content regardless of what is being set in the Makefile. In this case, environment variables are not considered unless you use the "-e" flag during the build:

$ make -e file

If you use this flag, then the environment values of the variables override any variables specifically set in the Makefile.
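As a quick illustration of the three cases, assuming the environment setup script has been sourced in the current shell and you are in the project directory:

$ make                 # Cases 1 and 2: variables come from the environment unless the Makefile sets its own
$ make CC="$CC"        # Case 3: a command-line assignment overrides any Makefile setting
$ make -e              # with -e, environment values override variables set in the Makefile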
If you are familiar with the popular Eclipse IDE, you can use an Eclipse Yocto Plug-in to allow you to develop, deploy, and test your application all from within Eclipse. This section describes general workflow using the SDK and Eclipse and how to configure and set up Eclipse.
The following figure and supporting list summarize the general application development workflow that employs both the SDK and Eclipse.
Prepare the host system for the Yocto Project: See the "Supported Linux Distributions" and "Required Packages for the Host Development System" sections, both in the Yocto Project Reference Manual, for requirements. In particular, be sure your host system has the xterm package installed.
Secure the Yocto Project kernel target image: You must have a target kernel image that has been built using the OpenEmbedded build system.
Depending on whether the Yocto Project has a pre-built image that matches your target architecture and where you are going to run the image while you develop your application (QEMU or real hardware), the area from which you get the image differs.
Download the image from machines if your target architecture is supported and you are going to develop and test your application on actual hardware.
Download the image from machines/qemu if your target architecture is supported and you are going to develop and test your application using the QEMU emulator.
Build your image if you cannot find a pre-built image that matches your target architecture. If your target architecture is similar to a supported architecture, you can modify the kernel image before you build it. See the "Patching the Kernel" section in the Yocto Project Development manual for an example.
For information on pre-built kernel image naming schemes for images that can run on the QEMU emulator, see the Yocto Project Software Development Kit (SDK) Developer's Guide.
Install the SDK: The SDK provides a target-specific cross-development toolchain, the root filesystem, the QEMU emulator, and other tools that can help you develop your application. For information on how to install the SDK, see the "Installing the SDK" section.
Secure the target root filesystem and the Cross-development toolchain: You need to find and download the appropriate root filesystem and the cross-development toolchain.
You can find the tarballs for the root filesystem in the same area used for the kernel image. Depending on the type of image you are running, the root filesystem you need differs. For example, if you are developing an application that runs on an image that supports Sato, you need to get a root filesystem that supports Sato.
You can find the cross-development toolchains at toolchains. Be sure to get the correct toolchain for your development host and your target architecture. See the "Locating Pre-Built SDK Installers" section for information and the "Installing the SDK" section for installation information.
Create and build your application: At this point, you need to have source files for your application. Once you have the files, you can use the Eclipse IDE to import them and build the project. If you are not using Eclipse, you need to use the cross-development tools you have installed to create the image.
Deploy the image with the application: If you are using the Eclipse IDE, you can deploy your image to the hardware or to QEMU through the project's preferences. If you are not using the Eclipse IDE, then you need to deploy the application to the hardware using other methods. Or, if you are using QEMU, you need to use that tool and load your image in for testing. See the "Using the Quick EMUlator (QEMU)" chapter in the Yocto Project Development Manual for information on using QEMU.
Test and debug the application: Once your application is deployed, you need to test it. Within the Eclipse IDE, you can use the debugging environment along with the set of installed user-space tools to debug your application. Of course, the same user-space tools are available separately if you choose not to use the Eclipse IDE.
The Eclipse IDE is a popular development environment and it fully supports development using the Yocto Project.
When you install and configure the Eclipse Yocto Project Plug-in into the Eclipse IDE, you maximize your Yocto Project experience. Installing and configuring the Plug-in results in an environment that has extensions specifically designed to let you more easily develop software. These extensions allow for cross-compilation, deployment, and execution of your output into a QEMU emulation session as well as actual target hardware. You can also perform cross-debugging and profiling. The environment also supports a suite of tools that allows you to perform remote profiling, tracing, collection of power data, collection of latency data, and collection of performance data.
This section describes how to install and configure the Eclipse IDE Yocto Plug-in and how to use it to develop your application.
To develop within the Eclipse IDE, you need to do the following:
Install the optimal version of the Eclipse IDE.
Configure the Eclipse IDE.
Install the Eclipse Yocto Plug-in.
Configure the Eclipse Yocto Plug-in.
It is recommended that you have the Luna SR2 (4.4.2) version of the Eclipse IDE installed on your development system. However, if you currently have the Kepler 4.3.2 version installed and you do not want to upgrade the IDE, you can configure Kepler to work with the Yocto Project.
If you do not have the Luna SR2 (4.4.2) Eclipse IDE installed, you can find the tarball at http://www.eclipse.org/downloads. From that site, choose the appropriate download from the "Eclipse IDE for C/C++ Developers". This version contains the Eclipse Platform, the Java Development Tools (JDT), and the Plug-in Development Environment.
Once you have downloaded the tarball, extract it into a clean directory. For example, the following commands unpack and install the downloaded Eclipse IDE tarball into a clean directory using the default name eclipse:
$ cd ~
$ tar -xzvf ~/Downloads/eclipse-cpp-luna-SR2-linux-gtk-x86_64.tar.gz
This section presents the steps needed to configure the Eclipse IDE.
Before installing and configuring the Eclipse Yocto Plug-in, you need to configure the Eclipse IDE. Follow these general steps:
Start the Eclipse IDE.
Make sure you are in your Workbench and select "Install New Software" from the "Help" pull-down menu.
Select Luna - http://download.eclipse.org/releases/luna from the "Work with:" pull-down menu. (If you are using Kepler, select Kepler - http://download.eclipse.org/releases/kepler instead.)
Expand the box next to "Linux Tools"
and select the
Linux Tools LTTng Tracer Control
,
Linux Tools LTTng Userspace Analysis
,
and
LTTng Kernel Analysis
boxes.
If these selections do not appear in the list,
that means the items are already installed.
LTTng - Linux Tracing Toolkit
box.
Expand the box next to "Mobile and Device Development" and select the following boxes. Again, if any of the following items are not available for selection, that means the items are already installed:
C/C++ Remote Launch (Requires RSE Remote System Explorer)
Remote System Explorer End-user Runtime
Remote System Explorer User Actions
Target Management Terminal (Core SDK)
TCF Remote System Explorer add-in
TCF Target Explorer
Expand the box next to "Programming
Languages" and select the
C/C++ Autotools Support
and C/C++ Development Tools
boxes.
For Luna, these items do not appear on the list
as they are already installed.
Complete the installation and restart the Eclipse IDE.
You can install the Eclipse Yocto Plug-in into the Eclipse IDE one of two ways: use the Yocto Project's Eclipse Update site to install the pre-built plug-in or build and install the plug-in from the latest source code.
To install the Eclipse Yocto Plug-in from the update site, follow these steps:
Start up the Eclipse IDE.
In Eclipse, select "Install New Software" from the "Help" menu.
Click "Add..." in the "Work with:" area.
Enter http://downloads.yoctoproject.org/releases/eclipse-plugin/2.1/luna in the URL field and provide a meaningful name in the "Name" field. (If you are using Kepler, enter http://downloads.yoctoproject.org/releases/eclipse-plugin/2.1/kepler in the URL field.)
Click "OK" to have the entry added to the "Work with:" drop-down list.
Select the entry for the plug-in from the "Work with:" drop-down list.
Check the boxes next to Yocto Project ADT Plug-in, Yocto Project Bitbake Commander Plug-in, and Yocto Project Documentation plug-in.
Complete the remaining software installation steps and then restart the Eclipse IDE to finish the installation of the plug-in.
To install the Eclipse Yocto Plug-in from the latest source code, follow these steps:
Be sure your development system is not using OpenJDK to build the plug-in by doing the following:
Use the Oracle JDK. If you don't have that, go to http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html and download the latest appropriate Java SE Development Kit tarball for your development system and extract it into your home directory.
In the shell in which you are going to do your work, export the location of the Oracle Java. The previous step creates a new folder for the extracted software. You need to use the following export command and provide the specific location:

export PATH=~/extracted_jdk_location/bin:$PATH
In the same shell, create a Git repository with:
$ cd ~
$ git clone git://git.yoctoproject.org/eclipse-poky
Be sure to checkout the correct tag. For example, if you are using Luna, do the following:
$ git checkout luna/yocto-2.1
This puts you in a detached HEAD state, which is fine since you are only going to be building and not developing. (If you are building the plug-in for Kepler, check out the kepler/yocto-2.1 branch.)
Change to the scripts directory within the Git repository:
$ cd scripts
Set up the local build environment by running the setup script:
$ ./setup.sh
When the script finishes execution, it prompts you with instructions on how to run the build.sh script, which is also in the scripts directory of the Git repository created earlier.
Run the build.sh script as directed. Be sure to provide the tag name, documentation branch, and a release name. Here is an example that uses the luna/yocto-2.1 tag, the master documentation branch, and krogoth for the release name:
$ ECLIPSE_HOME=/home/scottrif/eclipse-poky/scripts/eclipse ./build.sh luna/yocto-2.1 master krogoth 2>&1 | tee -a build.log
After running the script, the file org.yocto.sdk-release-date-archive.zip is in the current directory.
If necessary, start the Eclipse IDE and be sure you are in the Workbench.
Select "Install New Software" from the "Help" pull-down menu.
Click "Add".
Provide anything you want in the "Name" field.
Click "Archive" and browse to the
ZIP file you built in step eight.
This ZIP file should not be "unzipped", and must
be the *archive.zip
file
created by running the
build.sh
script.
Click the "OK" button.
Check the boxes that appear in the installation window to install the Yocto Project ADT Plug-in, Yocto Project Bitbake Commander Plug-in, and the Yocto Project Documentation plug-in.
Finish the installation by clicking through the appropriate buttons. You can click "OK" when prompted about installing software that contains unsigned content.
Restart the Eclipse IDE if necessary.
At this point you should be able to configure the Eclipse Yocto Plug-in as described in the "Configuring the Eclipse Yocto Plug-in" section.
Configuring the Eclipse Yocto Plug-in involves setting the Cross Compiler options and the Target options. The configurations you choose become the default settings for all projects. You do have opportunities to change them later when you configure the project (see the following section).
To start, you need to do the following from within the Eclipse IDE:
Choose "Preferences" from the "Window" menu to display the Preferences Dialog.
Click "Yocto Project ADT" to display the configuration screen.
To configure the Cross Compiler Options, you must select the type of toolchain, point to the toolchain, specify the sysroot location, and select the target architecture.
Selecting the Toolchain Type: Choose between Standalone pre-built toolchain and Build system derived toolchain for Cross Compiler Options.
Standalone Pre-built Toolchain: Select this mode when you are using a stand-alone cross-toolchain. For example, suppose you are an application developer and do not need to build a target image. Instead, you just want to use an architecture-specific toolchain on an existing kernel and target root filesystem.
Build System Derived Toolchain: Select this mode if the cross-toolchain has been installed and built as part of the Build Directory. When you select Build system derived toolchain, you are using the toolchain bundled inside the Build Directory.
Point to the Toolchain: If you are using a stand-alone pre-built toolchain, you should be pointing to where it is installed. See the "Installing the SDK" section for information about how the SDK is installed.
If you are using a system-derived toolchain, the path you provide for the Toolchain Root Location field is the Build Directory. See the "Building an SDK Installer" section.
Specify the Sysroot Location: This location is where the root filesystem for the target hardware resides.
The location of the sysroot filesystem depends on where you separately extracted and installed the filesystem.
For information on how to install the toolchain and on how to extract and install the sysroot filesystem, see the "Building an SDK Installer" section.
Select the Target Architecture: The target architecture is the type of hardware you are going to use or emulate. Use the pull-down Target Architecture menu to make your selection. The pull-down menu should have the supported architectures. If the architecture you need is not listed in the menu, you will need to build the image. See the "Building Images" section of the Yocto Project Quick Start for more information.
You can choose to emulate hardware using the QEMU emulator, or you can choose to run your image on actual hardware.
QEMU: Select this option if you will be using the QEMU emulator. If you are using the emulator, you also need to locate the kernel and specify any custom options.
If you selected Build system derived toolchain, the target kernel you built will be located in the Build Directory in the tmp/deploy/images/machine directory. If you selected Standalone pre-built toolchain, the pre-built image you downloaded is located in the directory you specified when you downloaded the image.
Most custom options are for advanced QEMU users to further customize their QEMU instance. These options are specified between paired angled brackets. Some options must be specified outside the brackets. In particular, the options serial, nographic, and kvm must all be outside the brackets. Use the man qemu command to get help on all the options and their use. The following is an example:

serial ‘<-m 256 -full-screen>’
Regardless of the mode, Sysroot is already defined as part of the Cross-Compiler Options configuration in the Sysroot Location: field.
External HW: Select this option if you will be using actual hardware.
Click the "OK" to save your plug-in configurations.
You can create two types of projects: Autotools-based, or Makefile-based. This section describes how to create Autotools-based projects from within the Eclipse IDE. For information on creating Makefile-based projects in a terminal window, see the "Makefile-Based Projects" section.
To create a project based on a Yocto template and then display the source code, follow these steps:
Select "Project" from the "File -> New" menu.
Double click C/C++.
Double click C Project to create the project.
Expand Yocto Project ADT Autotools Project.
Select Hello World ANSI C Autotools Project. This is an Autotools-based project based on a Yocto template.
Put a name in the Project name: field. Do not use hyphens as part of the name.
Click "Next".
Add information in the Author and Copyright notice fields.
Be sure the License field is correct.
Click "Finish".
If the "open perspective" prompt appears, click "Yes" so that you in the C/C++ perspective.
The left-hand navigation pane shows your project. You can display your source by double clicking the project's source file.
The earlier section, "Configuring the Eclipse Yocto Plug-in", sets up the default project configurations. You can override these settings for a given project by following these steps:
Select "Change Yocto Project Settings" from the "Project" menu. This selection brings up the Yocto Project Settings Dialog and allows you to make changes specific to an individual project.
By default, the Cross Compiler Options and Target Options for a project are inherited from settings you provided using the Preferences Dialog as described earlier in the "Configuring the Eclipse Yocto Plug-in" section. The Yocto Project Settings Dialog allows you to override those default settings for a given project.
Make your configurations for the project and click "OK".
Right-click in the navigation pane and select "Reconfigure Project" from the pop-up menu. This selection reconfigures the project by running autogen.sh in the workspace for your project. The script also runs libtoolize, aclocal, autoconf, autoheader, automake -a, and ./configure. Click on the "Console" tab beneath your source code to see the results of reconfiguring your project.
To build the project select "Build Project" from the "Project" menu. The console should update and you can note the cross-compiler you are using.
Select the project.
Select "Folder" from the
File > New
menu.
In the "New Folder" Dialog, select "Link to alternate location (linked folder)".
Click "Browse" to navigate to the include folder inside the same sysroot location selected in the Yocto Project configuration preferences.
Click "OK".
Click "Finish" to save the linked folder.
To start the QEMU emulator from within Eclipse, follow these steps:
Expose and select "External Tools" from the "Run" menu. Your image should appear as a selectable menu item.
Select your image from the menu to launch the emulator in a new window.
If needed, enter your host root password in the shell window at the prompt. This sets up a Tap 0 connection needed for running in user-space NFS mode.
Wait for QEMU to launch. Once QEMU launches, you can begin operating within that environment. One useful task at this point would be to determine the IP Address for the user-space NFS by using the ifconfig command.
Once the QEMU emulator is running the image, you can deploy your application using the Eclipse IDE and then use the emulator to perform debugging. Follow these steps to deploy the application.
$ ssh -XY user_name@remote_host_ip

After running the command, add the command to be executed in Eclipse's run configuration before the application as follows:
export DISPLAY=:10.0
Select "Debug Configurations..." from the "Run" menu.
In the left area, expand C/C++ Remote Application.
Locate your project and select it to bring up a new tabbed view in the Debug Configurations Dialog.
Enter the absolute path into which you want to deploy the application. Use the "Remote Absolute File Path for C/C++ Application:" field. For example, enter /usr/bin/programname.
Click on the "Debugger" tab to see the cross-tool debugger you are using.
Click on the "Main" tab.
Create a new connection to the QEMU instance by clicking on "new".
Select TCF, which means Target Communication Framework.
Click "Next".
Clear out the "host name" field and enter the IP Address determined earlier.
Click "Finish" to close the New Connections Dialog.
Use the drop-down menu now in the "Connection" field and pick the IP Address you entered.
Click "Debug" to bring up a login screen and login.
Accept the debug perspective.
As mentioned earlier in the manual, several tools exist that enhance your development experience. These tools are aids in developing and debugging applications and images. You can run these user-space tools from within the Eclipse IDE through the "YoctoProjectTools" menu.
Once you pick a tool, you need to configure it for the remote target. Every tool needs to have the connection configured. You must select an existing TCF-based RSE connection to the remote target. If one does not exist, click "New" to create one.
Here are some specifics about the remote tools:
Lttng2.0 trace import: Selecting this tool transfers the remote target's Lttng tracing data back to the local host machine and uses the Lttng Eclipse plug-in to graphically display the output. For information on how to use Lttng to trace an application, see http://lttng.org/documentation and the "LTTng (Linux Trace Toolkit, next generation)" section, which is in the Yocto Project Profiling and Tracing Manual.
Do not use the Lttng-user space (legacy) tool. This tool no longer has any upstream support.
Before you use the Lttng2.0 trace import tool, you need to set up the Lttng Eclipse plug-in and create a Tracing project. Do the following:
Select "Open Perspective" from the "Window" menu and then select "Other..." to bring up a menu of other perspectives. Choose "Tracing".
Click "OK" to change the Eclipse perspective into the Tracing perspective.
Create a new Tracing project by selecting "Project" from the "File -> New" menu.
Choose "Tracing Project" from the "Tracing" menu and click "Next".
Provide a name for your tracing project and click "Finish".
Generate your tracing data on the remote target.
Select "Lttng2.0 trace import" from the "Yocto Project Tools" menu to start the data import process.
Specify your remote connection name.
For the Ust directory path, specify the location of your remote tracing data. Make sure the location ends with ust (e.g. /usr/mysession/ust).
Click "OK" to complete the import process. The data is now in the local tracing project you created.
Right click on the data and then use the menu to Select "Generic CTF Trace" from the "Trace Type... -> Common Trace Format" menu to map the tracing type.
Right click the mouse and select "Open" to bring up the Eclipse Lttng Trace Viewer so you can view the tracing data.
PowerTOP: Selecting this tool runs PowerTOP on the remote target machine and displays the results in a new view called PowerTOP.
The "Time to gather data(sec):" field is the time passed in seconds before data is gathered from the remote target for analysis.
The "show pids in wakeups list:" field corresponds
to the -p
argument passed to
PowerTOP
.
LatencyTOP and Perf: LatencyTOP identifies system latency, while Perf monitors the system's performance counter registers. Selecting either of these tools causes an RSE terminal view to appear from which you can run the tools. Both tools refresh the entire screen to display results while they run. For more information on setting up and using perf, see the "perf" section in the Yocto Project Profiling and Tracing Manual.
SystemTap: SystemTap is a tool that lets you create and reuse scripts to examine the activities of a live Linux system. You can easily extract, filter, and summarize data that helps you diagnose complex performance or functional problems. For more information on setting up and using SystemTap, see the SystemTap documentation.
yocto-bsp: The yocto-bsp tool lets you quickly set up a Board Support Package (BSP) layer. The tool requires a Metadata location, build location, BSP name, BSP output location, and a kernel architecture. For more information on the yocto-bsp tool outside of Eclipse, see the "Creating a new BSP Layer Using the yocto-bsp Script" section in the Yocto Project Board Support Package (BSP) Developer's Guide.
This chapter describes the extensible SDK and how to use it. The extensible SDK makes it easy to add new applications and libraries to an image, modify the source for an existing component, test changes on the target hardware, and ease integration into the rest of the OpenEmbedded build system.
Information in this chapter covers features that are not part of the standard SDK. In other words, the chapter presents information unique to the extensible SDK only. For information on how to use the standard SDK, see the "Using the Standard SDK" chapter.
Getting set up to use the extensible SDK is identical to getting set up to use the standard SDK. You still need to locate and run the installer and then run the environment setup script. See the "Installing the SDK" and the "Running the SDK Environment Setup Script" sections for general information. The following items highlight the only differences between getting set up to use the extensible SDK as compared to the standard SDK:
Default Installation Directory: By default, the extensible SDK installs into the poky_sdk folder of your home directory. As with the standard SDK, you can choose to install the extensible SDK in any location when you run the installer. However, unlike the standard SDK, the location you choose needs to be writable for whichever users need to use the SDK, since files will need to be written under that directory during the normal course of operation.
Build Tools and Build System: The extensible SDK installer performs additional tasks as compared to the standard SDK installer. The extensible SDK installer extracts build tools specific to the SDK and the installer also prepares the internal build system within the SDK. Here is example output for running the extensible SDK installer:
$ ./poky-glibc-x86_64-core-image-minimal-core2-64-toolchain-ext-2.1+snapshot.sh
Poky (Yocto Project Reference Distro) Extensible SDK installer version 2.1+snapshot
===================================================================================
Enter target directory for SDK (default: ~/poky_sdk):
You are about to install the SDK to "/home/scottrif/poky_sdk". Proceed[Y/n]? Y
Extracting SDK......................................................................done
Setting it up...
Extracting buildtools...
Preparing build system...
done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
 $ . /home/scottrif/poky_sdk/environment-setup-core2-64-poky-linux
After installing the SDK, you need to run the SDK environment setup script. Here is the output:
$ source environment-setup-core2-64-poky-linux
SDK environment now set up; additionally you may now run devtool to perform development tasks.
Run devtool --help for further details.
Once you run the environment setup script, you have devtool available.
Using devtool in Your SDK Workflow
The cornerstone of the extensible SDK is a command-line tool called devtool. This tool provides a number of features that help you build, test, and package software within the extensible SDK, and optionally integrate it into an image built by the OpenEmbedded build system. The devtool command line is organized similarly to Git in that it has a number of sub-commands for each function. You can run devtool --help to see all the commands.
Two devtool subcommands that provide entry-points into development are:

devtool add: Assists in adding new software to be built.

devtool modify: Sets up an environment to enable you to modify the source of an existing component.
As with the OpenEmbedded build system, "recipes" represent software packages within devtool. When you use devtool add, a recipe is automatically created. When you use devtool modify, the specified existing recipe is used in order to determine where to get the source code and how to patch it. In both cases, an environment is set up so that when you build the recipe a source tree that is under your control is used in order to allow you to make changes to the source as desired. By default, both new recipes and the source go into a "workspace" directory under the SDK.
The remainder of this section presents the devtool add and devtool modify workflows.
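As a compact sketch of how these subcommands fit together, the following sequence uses a hypothetical recipe name, source URL, and target address (none of these are taken from the manual):

$ devtool add myhello http://example.com/myhello-1.0.tar.gz
$ devtool build myhello
$ devtool deploy-target myhello root@192.168.7.2
$ devtool update-recipe myhello
$ devtool reset myhello

Each of these steps is described in detail in the workflows that follow.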
Use devtool add to Add an Application
The devtool add command generates a new recipe based on existing source code. This command takes advantage of the workspace layer that many devtool commands use. The command is flexible enough to allow you to extract source code into both the workspace or a separate local Git repository and to use existing code that does not need to be extracted. Depending on your particular scenario, the arguments and options you use with devtool add form different combinations. The following diagram shows common development flows you would use with the devtool add command:
Generating the New Recipe: The top part of the flow shows three scenarios by which you could use devtool add to generate a recipe based on existing source code.

In a shared development environment, it is typical for other developers to be responsible for various areas of source code. As a developer, you are probably interested in using that source code as part of your development using the Yocto Project. All you need is access to the code, a recipe, and a controlled area in which to do your work.

Within the diagram, three possible scenarios feed into the devtool add workflow:
Left: The left scenario represents a common situation where the source code does not exist locally and needs to be extracted. In this situation, you just let it get extracted to the default workspace - you do not want it in some specific location outside of the workspace. Thus, everything you need will be located in the workspace:
$ devtool add recipe fetchuri
With this command, devtool creates a recipe and an append file in the workspace, and extracts the upstream source files into a local Git repository within the workspace's sources folder.
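As a rough sketch, with a hypothetical recipe named myhello the resulting workspace layout would look something like the following (exact file names depend on the version devtool detects):

workspace/appends/myhello.bbappend      (append file for the recipe)
workspace/recipes/myhello/myhello_1.0.bb  (generated recipe)
workspace/sources/myhello/              (local Git repository with the extracted source)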
Middle: The middle scenario also represents a situation where the source code does not exist locally. In this case, the code is again upstream and needs to be extracted to some local area - this time outside of the default workspace. As always, if required devtool creates a Git repository locally during the extraction. Furthermore, the first positional argument srctree in this case identifies where the devtool add command will locate the extracted code outside of the workspace:
$ devtool add recipe srctree fetchuri
In summary, the source code is pulled from fetchuri and extracted into the location defined by srctree as a local Git repository. Within the workspace, devtool creates both the recipe and an append file for the recipe.
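For instance, using hypothetical names, the source could be extracted to a directory alongside your other projects:

$ devtool add myhello ~/src/myhello http://example.com/myhello-1.0.tar.gz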
Right: The right scenario represents a situation where the source tree (srctree) has been previously prepared outside of the devtool workspace.
The following command names the recipe and identifies where the existing source tree is located:
$ devtool add recipe srctree
The command examines the source code and creates a recipe for it placing the recipe into the workspace.
Because the extracted source code already exists, devtool does not try to relocate it into the workspace - only the new recipe is placed in the workspace. Aside from a recipe folder, the command also creates an append folder and places an initial *.bbappend within.
Edit the Recipe: At this point, you can use devtool edit-recipe to open up the editor as defined by the $EDITOR environment variable and modify the file:
$ devtool edit-recipe recipe
From within the editor, you can make modifications to the recipe that take effect when you build it later.
Build the Recipe or Rebuild the Image: At this point in the flow, the next step you take depends on what you are going to do with the new code.
If you need to take the build output and eventually move it to the target hardware, you would use devtool build:
$ devtool build recipe
On the other hand, if you want an image to contain the recipe's packages for immediate deployment onto a device (e.g. for testing purposes), you can use the devtool build-image command:
$ devtool build-image image
Deploy the Build Output: When you use the devtool build command to build out your recipe, you probably want to see if the resulting build output works as expected on target hardware. You can deploy your build output to that target hardware by using the devtool deploy-target command:
$ devtool deploy-target recipe target
The target is a live target machine running as an SSH server. You can, of course, also deploy the image you build using the devtool build-image command to actual hardware. However, devtool does not provide a specific command that allows you to do this.
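For example, assuming a hypothetical recipe name and an illustrative target address reachable over SSH:

$ devtool deploy-target myhello root@192.168.7.2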
Optionally Update the Recipe With Patch Files: Once you are satisfied with the recipe, if you have made any changes to the source tree that you want to have applied by the recipe, you need to generate patches from those changes. You do this before moving the recipe to its final layer and cleaning up the workspace area devtool uses. This optional step is especially relevant if you are using or adding third-party software. To convert commits created using Git to patch files, use the devtool update-recipe command:
$ devtool update-recipe recipe
Move the Recipe to its Permanent Layer: Before cleaning up the workspace, you need to move the final recipe to its permanent layer. You must do this before using the devtool reset command if you want to retain the recipe.
Reset the Recipe: As a final step, you can restore the state such that standard layers and the upstream source are used to build the recipe rather than data in the workspace. To reset the recipe, use the devtool reset command:
$ devtool reset recipe
Use devtool modify to Modify the Source of an Existing Component
The devtool modify command prepares the way to work on existing code that already has a recipe in place. The command is flexible enough to allow you to extract code, specify the existing recipe, and keep track of and gather any patch files from other developers that are associated with the code. Depending on your particular scenario, the arguments and options you use with devtool modify form different combinations. The following diagram shows common development flows you would use with the devtool modify command:
Preparing to Modify the Code: The top part of the flow shows three scenarios by which you could use devtool modify to prepare to work on source files. Each scenario assumes the following:

The recipe exists in some layer external to the devtool workspace.
The source files exist upstream in an un-extracted state or locally in a previously extracted state.
The typical situation is where another developer has created some layer for use with the Yocto Project and their recipe already resides in that layer. Furthermore, their source code is readily available either upstream or locally.
Left: The left scenario represents a common situation where the source code does not exist locally and needs to be extracted. In this situation, the source is extracted into the default workspace location. The recipe, in this scenario, is in its own layer outside the workspace (i.e. meta-layername).
The following command identifies the recipe and by default extracts the source files:
$ devtool modify recipe
Once devtool locates the recipe, it uses the SRC_URI variable to locate the source code and any local patch files from other developers. Note that you cannot provide a URL for srctree when using the devtool modify command.
With this scenario, however, since no srctree argument exists, the devtool modify command by default extracts the source files to a Git structure. Furthermore, the location for the extracted source is the default area within the workspace. The result is that the command sets up both the source code and an append file within the workspace, with the recipe remaining in its original location.
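For instance, with a hypothetical recipe named myhello already provided by a layer, the single command below extracts its source into the workspace (under sources/myhello) and creates the append file:

$ devtool modify myhello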
Middle: The middle scenario represents a situation where the source code also does not exist locally. In this case, the code is again upstream and needs to be extracted to some local area as a Git repository. The recipe, in this scenario, is again in its own layer outside the workspace.
The following command tells
devtool
what recipe with
which to work and, in this case, identifies a local
area for the extracted source files that is outside
of the default workspace:
$ devtool modify recipe srctree
As with all extractions, the command uses
the recipe's SRC_URI
to locate the
source files.
Once the files are located, the command by default
extracts them.
Providing the srctree
argument instructs devtool
where to place the extracted source.
Within the workspace, devtool
creates an append file for the recipe.
The recipe remains in its original location but
the source files are extracted to the location you
provided with srctree
.
Right:
The right scenario represents a situation
where the source tree
(srctree
) exists as a
previously extracted Git structure outside of
the devtool
workspace.
In this example, the recipe also exists
elsewhere in its own layer.
The following command tells
devtool
the recipe
with which to work, uses the "-n" option to indicate
source does not need to be extracted, and uses
srctree
to point to the
previously extracted source files:
$ devtool modify -n recipe srctree
Once the command finishes, it creates only an append file for the recipe in the workspace. The recipe and the source code remain in their original locations.
Edit the Source:
Once you have used the devtool modify
command, you are free to make changes to the source
files.
You can use any editor you like to make and save
your source code modifications.
Build the Recipe: Once you have updated the source files, you can build the recipe.
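For example, using the same recipe placeholder as elsewhere in this flow:
$ devtool build recipe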
Deploy the Build Output:
When you use the devtool build
command to build out your recipe, you probably want to see
if the resulting build output works as expected on target
hardware.
You can deploy your build output to that target hardware by
using the devtool deploy-target
command:
$ devtool deploy-target recipe target
The target
is a live target machine
running as an SSH server.
You can, of course, also deploy the image you build
using the devtool build-image
command
to actual hardware.
However, devtool
does not provide a
specific command that allows you to do this.
Optionally Create Patch Files for Your Changes:
After you have debugged your changes, you can
use devtool update-recipe
to
generate patch files for all the commits you have
made.
$ devtool update-recipe recipe
By default, the
devtool update-recipe
command
creates the patch files in a folder named the same
as the recipe beneath the folder in which the recipe
resides, and updates the recipe's
SRC_URI
statement to point to the generated patch files.
You can use the "-a" or "--append LAYERDIR"
option to cause the command to create append files
in a specific layer rather than the default
recipe layer.
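For example, assuming base_layer_directory is the path to the layer in which you want the append file created, the command might look like the following:
$ devtool update-recipe recipe -a base_layer_directory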
Restore the Workspace:
The devtool reset
command restores the
state so that standard layers and upstream sources are
used to build the recipe rather than what is in the
workspace.
$ devtool reset recipe
A Closer Look at devtool add
The devtool add
command automatically creates a
recipe based on the source tree with which you provide it.
Currently, the command has support for the following:
Autotools (autoconf
and
automake
)
CMake
Scons
qmake
Plain Makefile
Out-of-tree kernel module
Binary package (i.e. "-b" option)
Node.js module through
npm
Python modules that use setuptools
or distutils
Apart from binary packages, the determination of how a source tree
should be treated is automatic based on the files present within
that source tree.
For example, if a CMakeLists.txt
file is found,
then the source tree is assumed to be using
CMake and is treated accordingly.
The remainder of this section covers specifics regarding how parts of the recipe are generated.
If you do not specify a name and version on the command
line, devtool add
attempts to determine
the name and version of the software being built from
various metadata within the source tree.
Furthermore, the command sets the name of the created recipe
file accordingly.
If the name or version cannot be determined, the
devtool add
command prints an error and
you must re-run the command with both the name and version
or just the name or version specified.
Sometimes the name or version determined from the source tree might be incorrect. For such a case, you must reset the recipe:
$ devtool reset -n recipename
After running the devtool reset
command,
you need to run devtool add
again and
provide the name or the version.
The devtool add
command attempts to
detect build-time dependencies and map them to other recipes
in the system.
During this mapping, the command fills in the names of those
recipes in the
DEPENDS
value within the recipe.
If a dependency cannot be mapped, then a comment is placed in
the recipe indicating such.
A dependency might fail to map because its naming is not
recognized or because the dependency simply is not available.
For cases where the dependency is not available, you must use
the devtool add
command to add an
additional recipe to satisfy the dependency and then come
back to the first recipe and add its name to
DEPENDS
.
If you need to add runtime dependencies, you can do so by adding the following to your recipe:
RDEPENDS_${PN} += "dependency1 dependency2 ..."
The devtool add
command often cannot
distinguish between mandatory and optional dependencies.
Consequently, some of the detected dependencies might
in fact be optional.
When in doubt, consult the documentation or the configure
script for the software the recipe is building for further
details.
In some cases, you might find you can substitute the
dependency for an option to disable the associated
functionality passed to the configure script.
The devtool add
command attempts to
determine if the software you are adding is able to be
distributed under a common open-source license and sets the
LICENSE
value accordingly.
You should double-check this value against the documentation
or source files for the software you are building and update
that LICENSE
value if necessary.
The devtool add
command also sets the
LIC_FILES_CHKSUM
value to point to all files that appear to be license-related.
However, license statements often appear in comments at the top
of source files or within documentation.
Consequently, you might need to amend the
LIC_FILES_CHKSUM
variable to point to one
or more of those comments if present.
Setting LIC_FILES_CHKSUM
is particularly
important for third-party software.
The mechanism attempts to ensure correct licensing should you
upgrade the recipe to a newer upstream version in the future.
Any change in licensing is detected and you receive an error
prompting you to check the license text again.
If the devtool add
command cannot
determine licensing information, the
LICENSE
value is set to "CLOSED" and the
LIC_FILES_CHKSUM
value remains unset.
This behavior allows you to continue with development but is
unlikely to be correct in all cases.
Consequently, you should check the documentation or source
files for the software you are building to determine the actual
license.
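As a rough sketch, a corrected license section of a recipe might look like the following; the license name is an assumption and the checksum is a placeholder you must compute from the actual license file:
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=<md5sum of the license file>"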
The use of make
by itself is very common
in both proprietary and open source software.
Unfortunately, Makefiles are often not written with
cross-compilation in mind.
Thus, devtool add
often cannot do very
much to ensure that these Makefiles build correctly.
It is very common, for example, to explicitly call
gcc
instead of using the
CC
variable.
Usually, in a cross-compilation environment,
gcc
is the compiler for the build host
and the cross-compiler is named something similar to
arm-poky-linux-gnueabi-gcc
and might
require some arguments (e.g. to point to the associated sysroot
for the target machine).
When writing a recipe for Makefile-only software, keep the following in mind:
You probably need to patch the Makefile to use
variables instead of hardcoding tools within the
toolchain such as gcc
and
g++
(see the sketch following this list).
The environment in which make
runs
is set up with various standard variables for
compilation (e.g. CC
,
CXX
, and so forth) in a similar
manner to the environment set up by the SDK's
environment setup script.
One easy way to see these variables is to run the
devtool build
command on the
recipe and then look in
oe-logs/run.do_compile
.
Towards the top of this file you will see a list of
environment variables that are being set.
You can take advantage of these variables within the
Makefile.
If the Makefile sets a default for a variable using "=",
that default overrides the value set in the environment,
which is usually not desirable.
In this situation, you can either patch the Makefile
so it sets the default using the "?=" operator, or
you can alternatively force the value on the
make
command line.
To force the value on the command line, add the
variable setting to
EXTRA_OEMAKE
within the recipe as follows:
EXTRA_OEMAKE += "'CC=${CC}' 'CXX=${CXX}'"
In the above example, single quotes are used around the variable settings as the values are likely to contain spaces because required default options are passed to the compiler.
Hardcoding paths inside Makefiles is often problematic in a cross-compilation environment. This is particularly true because those hardcoded paths often point to locations on the build host and thus will either be read-only or will introduce contamination into the cross-compilation by virtue of being specific to the build host rather than the target. Patching the Makefile to use prefix variables or other path variables is usually the way to handle this.
Sometimes a Makefile runs target-specific commands such
as ldconfig
.
For such cases, you might be able to simply apply
patches that remove these commands from the Makefile.
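As an illustration of the points about tool variables and "?=" defaults, the following is a minimal sketch of a cross-compile-friendly Makefile fragment; the program name and flag values are assumptions, not taken from any particular project:
# "?=" assigns a default only if the variable is not already set, so the
# values exported by the SDK environment setup script (or forced through
# EXTRA_OEMAKE) take precedence over these fallbacks.
CC ?= gcc
CFLAGS ?= -O2

# Hypothetical program; the link step uses $(CC) rather than a hardcoded gcc.
# (The command line below must be indented with a tab.)
myapp: myapp.o
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^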
Often, you need to build additional tools that run on the
build host system as opposed to the target.
You should indicate this using one of the following methods
when you run devtool add
:
Specify the name of the recipe such that it ends with "-native". Specifying the name like this produces a recipe that only builds for the build host.
Specify the "--also-native" option with the
devtool add
command.
Specifying this option creates a recipe file that still
builds for the target but also creates a variant with
a "-native" suffix that builds for the build host.
You can use the devtool add
command in the
following form to add Node.js modules:
$ devtool add "npm://registry.npmjs.org;name=forever;version=0.15.1"
The name and version parameters are mandatory. Lockdown and shrinkwrap files are generated and pointed to by the recipe so that the dependency versions fetched the first time are pinned for subsequent builds. Checksums are also saved and verified on future fetches. Together, these behaviors ensure the reproducibility and integrity of the build.
You must use quotes around the URL.
The devtool add
command itself does not require
the quotes, but the shell treats ";" as a separator
between multiple commands.
Thus, without the quotes,
devtool add
does not receive the
other parts, which results in several "command not
found" errors.
To support adding Node.js modules, a
nodejs
recipe must be part of your
SDK to provide Node.js itself.
When building a recipe with devtool build
, the
typical build progression is as follows:
Fetch the source
Unpack the source
Configure the source
Compile the source
Install the build output
Package the installed output
For recipes in the workspace, fetching and unpacking are disabled because the source tree has already been prepared and is persistent. Each of these build steps is defined as a function, usually with a "do_" prefix. These functions are typically shell scripts but can instead be written in Python.
If you look at the contents of a recipe, you will see that the
recipe does not include complete instructions for building the
software.
Instead, common functionality is encapsulated in classes inherited
with the inherit
directive, leaving the recipe
to describe just the things that are specific to the software to be
built.
A base
class exists that is implicitly inherited by all recipes and provides
the functionality that most typical recipes need.
The remainder of this section presents information useful when working with recipes.
When you are debugging a recipe that you previously created using
devtool add
or whose source you are modifying
by using the devtool modify
command, after
the first run of devtool build
, you will
find some symbolic links created within the source tree:
oe-logs
, which points to the directory in
which log files and run scripts for each build step are created
and oe-workdir
, which points to the temporary
work area for the recipe.
You can use these links to get more information on what is
happening at each build step.
These locations under oe-workdir
are
particularly useful:
image/
:
Contains all of the files installed at the
do_install
stage.
Within a recipe, this directory is referred to by the
expression ${D}.
sysroot-destdir/
:
Contains a subset of files installed within
do_install
that have been put into the
shared sysroot.
For more information, see the
"Sharing Files Between Recipes"
section.
packages-split/
:
Contains subdirectories for each package produced by the
recipe.
For more information, see the
"Packaging" section.
If the software your recipe is building uses GNU autoconf,
then a fixed set of arguments is passed to it to enable
cross-compilation plus any extras specified by
EXTRA_OECONF
set within the recipe.
If you wish to pass additional options, add them to
EXTRA_OECONF
.
Other supported build tools have similar variables
(e.g.
EXTRA_OECMAKE
for CMake,
EXTRA_OESCONS
for Scons, and so forth).
If you need to pass anything on the make
command line, you can use EXTRA_OEMAKE
to do
so.
You can use the devtool configure-help
command
to help you set the arguments listed in the previous paragraph.
The command determines the exact options being passed, and shows
them to you along with any custom arguments specified through
EXTRA_OECONF
.
If applicable, the command also shows you the output of the
configure script's "--help" option as a reference.
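For example, a hypothetical recipe could pass an extra configure option and you could then review the full set of configure arguments as follows (the --disable-static option is purely illustrative):
EXTRA_OECONF += "--disable-static"
$ devtool configure-help recipe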
Recipes often need to use files provided by other recipes on the build host. For example, an application linking to a common library needs access to the library itself and its associated headers. The way this access is accomplished within the extensible SDK is through the sysroot. One sysroot exists per "machine" for which the SDK is being built. In practical terms, this means a sysroot exists for the target machine, and a sysroot exists for the build host.
Recipes should never write files directly into the sysroot.
Instead, files should be installed into standard locations
during the
do_install
task within the
${D}
directory.
A subset of these files automatically go into the sysroot.
The reason for this limitation is that almost all files that go
into the sysroot are cataloged in manifests in order to ensure
they can be removed later when a recipe is modified or removed.
Thus, the sysroot is able to remain free from stale files.
Packaging is not always particularly relevant within the extensible SDK. However, if you examine how build output gets into the final image on the target device, it is important to understand packaging because the contents of the image are expressed in terms of packages and not recipes.
During the
do_package
task, files installed during the
do_install
task are split into one main package, which is almost always named
the same as the recipe, and several other packages.
This separation is done because not all of those installed files
are always useful in every image.
For example, you probably do not need any of the documentation
installed in a production image.
Consequently, for each recipe the documentation files are separated
into a -doc
package.
Recipes that package software that has optional modules or
plugins might do additional package splitting as well.
After building a recipe you can see where files have gone by
looking in the oe-workdir/packages-split
directory, which contains a subdirectory for each package.
Apart from some advanced cases, the
PACKAGES
and
FILES
variables control splitting.
The PACKAGES
variable lists all of the
packages to be produced, while the FILES
variable specifies which files to include in each package,
using an override to specify the package.
For example, FILES_${PN}
specifies the files
to go into the main package (i.e. the main package is named the
same as the recipe and ${PN}
evaluates to the recipe name).
The order of the PACKAGES
value is significant.
For each installed file, the first package whose
FILES
value matches the file is the package
into which the file goes.
Defaults exist for both the PACKAGES
and
FILES
variables.
Consequently, you might find you do not even need to set these
variables in your recipe unless the software the recipe is
building installs files into non-standard locations.
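As a sketch, a recipe that splits hypothetical plugin files into their own package might contain the following; the package name and file paths are assumptions:
PACKAGES =+ "${PN}-plugins"
FILES_${PN}-plugins = "${libdir}/myapp/plugins/*.so"
Using "=+" prepends the new package to PACKAGES so that its FILES value is matched before the main package claims the files.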
If you use the devtool deploy-target
command to write a recipe's build output to the target, and
you are working on an existing component of the system, then you
might find yourself in a situation where you need to restore the
original files that existed prior to running the
devtool deploy-target
command.
Because the devtool deploy-target
command
backs up any files it overwrites, you can use the
devtool undeploy-target
to restore those files
and remove any other files the recipe deployed.
Consider the following example:
$ devtool undeploy-target lighttpd root@192.168.7.2
If you have deployed multiple applications, you can remove them all at once thus restoring the target device back to its original state:
$ devtool undeploy-target -a root@192.168.7.2
Information about files deployed to the target as well as any backed up files are stored on the target itself. This storage of course requires some additional space on the target machine.
The devtool deploy-target
and
devtool undeploy-target
commands do not
currently interact with any package management system on the
target device (e.g. RPM or OPKG).
Consequently, you should not intermingle
devtool deploy-target
operations with package
manager operations on the target device.
Doing so could result in a conflicting set of files.
The extensible SDK typically only comes with a small number of tools
and libraries out of the box.
If you have a minimal SDK, then it starts mostly empty and is
populated on-demand.
However, sometimes you will need to explicitly install extra items
into the SDK.
If you need these extra items, you can first search for the items
using the devtool search
command.
For example, suppose you need to link to libGL but you are not sure
which recipe provides it.
You can use the following command to find out:
$ devtool search libGL
mesa                  A free implementation of the OpenGL API
Once you know the recipe (i.e. mesa
in this
example), you can install it:
$ devtool sdk-install mesa
By default, the devtool sdk-install
command assumes the
item is available in pre-built form from your SDK provider.
If the item is not available and it is acceptable to build the item
from source, you can add the "-s" option as follows:
$ devtool sdk-install -s mesa
It is important to remember that building the item from source takes
significantly longer than installing the pre-built artifact.
Also, if no recipe exists for the item you want to add to the SDK, you
must instead add it using the devtool add
command.
If you are working with an extensible SDK that gets occasionally updated (e.g. typically when that SDK has been provided to you by another party), then you will need to manually pull down those updates to your installed SDK.
To update your installed SDK, run the following:
$ devtool sdk-update
The previous command assumes your SDK provider has set the default update URL for you. If that URL has not been set, you need to specify it yourself as follows:
$ devtool sdk-update path_to_update_directory
You might need to produce an SDK that contains your own custom libraries for sending to a third party (e.g., if you are a vendor with customers needing to build their own software for the target platform). If that is the case, then you can produce a derivative SDK based on the currently installed SDK fairly easily. Use these steps:
If necessary, install an extensible SDK that you want to use as a base for your derivative SDK.
Source the environment script for the SDK.
Add the extra libraries or other components
you want by using the devtool add
command.
Run the devtool build-sdk
command.
The above procedure takes the recipes added to the workspace and constructs a new SDK installer containing those recipes and the resulting binary artifacts. The recipes go into their own separate layer in the constructed derivative SDK, leaving the workspace clean and ready for users to add their own recipes.
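A minimal sketch of this sequence, using hypothetical paths and recipe names, might look like the following:
$ . /path/to/sdk/environment-setup-i586-poky-linux
$ devtool add mylib /path/to/mylib-source
$ devtool build-sdk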
You can use existing, pre-built toolchains by locating and running an SDK installer script that ships with the Yocto Project. Using this method, you select and download an architecture-specific toolchain installer and then run the script to hand-install the toolchain.
You can find SDK installers here:
Standard SDK Installers
Go to http://downloads.yoctoproject.org/releases/yocto/yocto-2.1/toolchain/
and find the folder that matches your host development system
(i.e. i686
for 32-bit machines or
x86_64
for 64-bit machines).
Go into that folder and download the toolchain installer
whose name includes the appropriate target architecture.
The toolchains provided by the Yocto Project are based on
the core-image-sato
image and contain
image and contain
libraries appropriate for developing against that image.
For example, if your host development system is a 64-bit x86
system and you are going to use your cross-toolchain for a
32-bit x86 target, go into the x86_64
folder and download the following installer:
poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
Extensible SDK Installers Installers for the extensible SDK are in http://downloads.yoctoproject.org/releases/yocto/yocto-2.1/toolchain/.
As an alternative to locating and downloading a toolchain installer,
you can build the toolchain installer assuming you have first sourced
the environment setup script.
See the
"Building Images"
section in the Yocto Project Quick Start for steps that show you
how to set up the Yocto Project environment.
In particular, you need to be sure the
MACHINE
variable matches the architecture for which you are building and that
the
SDKMACHINE
variable is correctly set if you are building a toolchain designed to
run on an architecture that differs from your current development host
machine (i.e. the build machine).
To build the toolchain installer for a standard SDK and populate the SDK image, use the following command:
$ bitbake image -c populate_sdk
You can do the same for the extensible SDK using this command:
$ bitbake image -c populate_sdk_ext
These commands result in a toolchain installer that contains the sysroot that matches your target root filesystem.
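For example, using the core-image-sato image mentioned earlier, you could build the standard SDK installer as follows:
$ bitbake core-image-sato -c populate_sdk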
When the bitbake
command completes, the toolchain
installer will be in
tmp/deploy/sdk
in the Build Directory.
By default, this toolchain does not build static binaries.
If you want to use the toolchain to build these types of
libraries, you need to be sure your image has the appropriate
static development libraries.
Use the
IMAGE_INSTALL
variable inside your local.conf
file to
install the appropriate library packages.
Following is an example using glibc
static
development libraries:
IMAGE_INSTALL_append = " glibc-staticdev"
After installing the toolchain, for some use cases you might need to separately extract a root filesystem:
You want to boot the image using NFS.
You want to use the root filesystem as the target sysroot. For example, the Eclipse IDE environment with the Eclipse Yocto Plug-in installed allows you to use QEMU to boot under NFS.
You want to develop your target application using the root filesystem as the target sysroot.
To extract the root filesystem, first source
the cross-development environment setup script to establish
necessary environment variables.
If you built the toolchain in the Build Directory, you will find
the toolchain environment script in the
tmp
directory.
If you installed the toolchain by hand, the environment setup
script is located in /opt/poky/2.1
.
After sourcing the environment script, use the
runqemu-extract-sdk
command and provide the
filesystem image.
Following is an example.
The second command sets up the environment.
In this case, the setup script is located in the
/opt/poky/2.1
directory.
The third command extracts the root filesystem from a previously
built filesystem that is located in the
~/Downloads
directory.
Furthermore, this command extracts the root filesystem into the
qemux86-sato
directory:
$ cd ~
$ source /opt/poky/2.1/environment-setup-i586-poky-linux
$ runqemu-extract-sdk \
    ~/Downloads/core-image-sato-sdk-qemux86-2011091411831.rootfs.tar.bz2 \
    $HOME/qemux86-sato
You could now point to the target sysroot at
qemux86-sato
.
The following figure shows the resulting directory structure after
you install the Standard SDK by running the *.sh
SDK installation script:
The installed SDK consists of an environment setup script for the SDK,
a configuration file for the target, a version file for the target,
and the root filesystem (sysroots
) needed to
develop objects for the target system.
Within the figure, italicized text is used to indicate replaceable
portions of the file or directory name.
For example, install_dir/version
is the directory where the SDK is installed.
By default, this directory is /opt/poky/
.
And, version
represents the specific
snapshot of the SDK (e.g. 2.1+snapshot
).
Furthermore, target
represents the target
architecture (e.g. i586
) and
host
represents the development system's
architecture (e.g. x86_64
).
Thus, the complete names of the two directories within the
sysroots
could be
i586-poky-linux
and
x86_64-pokysdk-linux
for the target and host,
respectively.
The following figure shows the resulting directory structure after
you install the Extensible SDK by running the *.sh
SDK installation script:
The installed directory structure for the extensible SDK is quite different than the installed structure for the standard SDK. The extensible SDK does not separate host and target parts in the same manner as does the standard SDK. The extensible SDK uses an embedded copy of the OpenEmbedded build system, which has its own sysroots.
Of note in the directory structure are an environment setup script for the SDK, a configuration file for the target, a version file for the target, and a log file for the OpenEmbedded build system preparation script run by the installer.
Within the figure, italicized text is used to indicate replaceable
portions of the file or directory name.
For example,
install_dir
is the directory where the SDK
is installed, which is poky_sdk
by default.
target
represents the target
architecture (e.g. i586
) and
host
represents the development system's
architecture (e.g. x86_64
).
This appendix presents customizations you can apply to both the standard and extensible SDK. Each subsection identifies the type of SDK to which the section applies.
The extensible SDK primarily consists of a pre-configured copy of
the OpenEmbedded build system from which it was produced.
Thus, the SDK's configuration is derived using that build system and
the following filters, which the OpenEmbedded build system applies
against local.conf
and
auto.conf
if they are present:
Variables whose values start with "/" are excluded since the assumption is that those values are paths that are likely to be specific to the build host.
Variables listed in
SDK_LOCAL_CONF_BLACKLIST
are excluded.
The default value blacklists
CONF_VERSION
,
BB_NUMBER_THREADS
,
PARALLEL_MAKE
,
PRSERV_HOST
,
and
SSTATE_MIRRORS
.
Variables listed in
SDK_LOCAL_CONF_WHITELIST
are included.
Including a variable in the value of
SDK_LOCAL_CONF_WHITELIST
overrides either
of the above two conditions.
The default value is blank.
Classes inherited globally with
INHERIT
that are listed in
SDK_INHERIT_BLACKLIST
are disabled.
Using SDK_INHERIT_BLACKLIST
to disable
these classes is the typical method of disabling classes that
are problematic or unnecessary in the SDK context.
The default value blacklists the
buildhistory
and
icecc
classes.
Additionally, the contents of conf/sdk-extra.conf
,
when present, are appended to the end of
conf/local.conf
within the produced SDK, without
any filtering.
The sdk-extra.conf
file is particularly useful
if you want to set a variable value just for the SDK and not the
OpenEmbedded build system used to create the SDK.
In most cases, the extensible SDK defaults should work. However, some cases exist for which you might consider making adjustments:
If your SDK configuration inherits additional classes
using the
INHERIT
variable and you do not need or want those classes enabled in
the SDK, you can blacklist them by adding them to the
SDK_INHERIT_BLACKLIST
variable.
The default value of SDK_INHERIT_BLACKLIST
is set using the "?=" operator.
Consequently, you will need to either set the complete value
using "=" or append the value using "_append".
If you have classes or recipes that add additional tasks to the standard build flow (i.e. that execute as part of building the recipe as opposed to needing to be called explicitly), then you need to do one of the following:
Ensure the tasks are shared state tasks (i.e. their
output is saved to and can be restored from the shared
state cache), or that the tasks are able to be
produced quickly from a task that is a shared state
task and add the task name to the value of
SDK_RECRDEP_TASKS
.
Disable the tasks if they are added by a class and
you do not need the functionality the class provides
in the extensible SDK.
To disable the tasks, add the class to
SDK_INHERIT_BLACKLIST
as previously
described.
Generally, you want to have a shared state mirror set up so users of the SDK can add additional items to the SDK after installation without needing to build the items from source. See the "Providing Additional Installable Extensible SDK Content" section for information.
If you want users of the SDK to be able to easily update the
SDK, you need to set the
SDK_UPDATE_URL
variable.
For more information, see the
"Providing Updates After Installing the Extensible SDK"
section.
If you have adjusted the list of files and directories that
appear in
COREBASE
(other than layers that are enabled through
bblayers.conf
), then you must list these
files in
COREBASE_FILES
so that the files are copied into the SDK.
If your OpenEmbedded build system setup uses a different
environment setup script other than
oe-init-build-env
or
oe-init-build-env-memres
,
then you must set
OE_INIT_ENV_SCRIPT
to point to the environment setup script you use.
You must also reflect this change in the value of the
COREBASE_FILES
variable as previously
described.
You can change the title shown by the SDK installer by setting the
SDK_TITLE
variable.
By default, this title is derived from
DISTRO_NAME
when it is set.
If the DISTRO_NAME
variable is not set, the title
is derived from the
DISTRO
variable.
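For example, to give the installer a custom title, you might set the following in your distro configuration; the title text shown is only an illustration:
SDK_TITLE = "My Product SDK"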
When you make changes to your configuration or to the metadata and
if you want those changes to be reflected in installed SDKs, you need
to perform additional steps to make it possible for those that use
the SDK to update their installations with the
devtool sdk-update
command:
Create a directory that can be shared over HTTP or HTTPS.
Set the
SDK_UPDATE_URL
variable to point to the corresponding HTTP or HTTPS URL.
Setting this variable causes any SDK built to default to that
URL and thus, the user does not have to pass the URL to the
devtool sdk-update
command.
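For example, assuming the shared directory is served at a hypothetical URL, you might set:
SDK_UPDATE_URL = "http://example.com/some_path/sdk-updates"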
Build the extensible SDK normally (i.e. use the
bitbake imagename -c populate_sdk_ext
command).
Publish the SDK using the following command:
$ oe-publish-sdk some_path/sdk-installer.sh path_to_shared/http_directory
You must repeat this step each time you rebuild the SDK with changes that you want to make available through the update mechanism.
Completing the above steps allows users of the existing SDKs to
simply run devtool sdk-update
to retrieve the
latest updates.
See the
"Updating the Extensible SDK"
section for further information.
If you want the users of the extensible SDK you are building to be able to add items to the SDK without needing to build the items from source, you need to do a number of things:
Ensure the additional items you want the user to be able to
install are actually built.
You can ensure these items are built a number of different
ways: 1) Build them explicitly, perhaps using one or more
"meta" recipes that depend on lists of other recipes to keep
things tidy, or 2) Build the "world" target and set
EXCLUDE_FROM_WORLD_pn-recipename
for the recipes you do not want built.
See the
EXCLUDE_FROM_WORLD
variable for additional information.
Expose the sstate-cache
directory
produced by the build.
Typically, you expose this directory over HTTP or HTTPS.
Set the appropriate configuration so that the produced SDK
knows how to find the configuration.
The variable you need to set is
SSTATE_MIRRORS
:
SSTATE_MIRRORS = "file://.* http://example
.com/some_path
/sstate-cache/PATH"
You can set the SSTATE_MIRRORS
variable
in two different places:
If the mirror value you are setting is appropriate to
be set for both the OpenEmbedded build system that is
actually building the SDK and the SDK itself (i.e. the
mirror is accessible in both places or it will fail
quickly on the OpenEmbedded build system side, and its
contents will not interfere with the build), then you
can set the variable in your
local.conf
or custom distro
configuration file.
You can then "whitelist" the variable through
to the SDK by adding the following:
SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
Alternatively, if you just want to set the
SSTATE_MIRRORS
variable's value
for the SDK alone, create a
conf/sdk-extra.conf
either in
your
Build Directory
or within any layer and put your
SSTATE_MIRRORS
setting within
that file.
This second option is the safest option should you have any
doubts as to which method to use when setting
SSTATE_MIRRORS.
By default, the extensible SDK bundles the shared state artifacts for
everything needed to reconstruct the image for which the SDK was built.
This bundling can lead to an SDK installer file that is a Gigabyte or
more in size.
If the size of this file causes a problem, you can build an SDK that
has just enough in it to install and provide access to the
devtool command
by setting the following in your
configuration:
SDK_EXT_TYPE = "minimal"
Setting
SDK_EXT_TYPE
to "minimal" produces an SDK installer that is around 35 Mbytes in
size, which downloads and installs quickly.
You need to realize, though, that the minimal installer does not
install any libraries or tools out of the box.
These must be installed either "on the fly" or through actions you
perform using devtool
or explicitly with the
devtool sdk-install
command.
In most cases, when building a minimal SDK you will need to also enable
bringing in the information on a wider range of packages produced by
the system.
This is particularly true so that devtool add
is able to effectively map dependencies it discovers in a source tree
to the appropriate recipes, and so that the
devtool search
command
is able to return useful results.
To facilitate this wider range of information, you would additionally set the following:
SDK_INCLUDE_PKGDATA = "1"
See the
SDK_INCLUDE_PKGDATA
variable for additional information.
Setting the SDK_INCLUDE_PKGDATA
variable as
shown causes the "world" target to be built so that information
for all of the recipes included within it is available.
Having these recipes available increases build time significantly and
increases the size of the SDK installer by 30-80 Mbytes depending on
how many recipes are included in your configuration.
You can use
EXCLUDE_FROM_WORLD_pn-recipename
for recipes you want to exclude.
However, it is assumed that you would need to be building the "world"
target if you want to provide additional items to the SDK.
Consequently, building for "world" should not represent undue
overhead in most cases.
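For example, to exclude a hypothetical recipe named myrecipe from the "world" target, you could set:
EXCLUDE_FROM_WORLD_pn-myrecipe = "1"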
If you set SDK_EXT_TYPE
to "minimal",
then providing a shared state mirror is mandatory so that items
can be installed as needed.
See the
"Providing Additional Installable Extensible SDK Content"
section for more information.