Encoding h.264 for iOS with main and high profile

In this post, the focus is on encoding video for iOS devices with the h.264 main and high profiles. Typically, a video encoded for iOS would target the baseline profile, since the baseline profile makes it possible to run a video on old iPhone 3 devices as well as newer iPhone 4/5 and iPad devices. But, the baseline profile has some drawbacks that a developer should consider.


Please be aware that this blog page is now out of date; click on encoding_h264_for_ios_with_main_and_high_profile to access the updated blog page.

First, the baseline profile typically generates a larger file than the main or high profile. This is because the main and high profiles include advanced encode time analysis and CABAC compression approaches. If the h.264 videos will be included in the app resources of your iOS app, then every bit of space savings helps. Here are the byte size results from a very simple animation video that will be examined in this post:

36960 HomerSanta_baseline.m4v
34412 HomerSanta_main.m4v
34391 HomerSanta_high.m4v
40386 HomerSanta_high422.m4v
32034 HomerSanta_high444.m4v

The first thing to notice about these results is that the main profile output is smaller than the baseline output. For this example, the difference is only about a 7% savings when using the main profile. For a longer and more complex video the space savings will be larger. There is not much additional savings in this example when moving from main to high profile; the CABAC compression enabled by the main profile likely accounts for most of the file size improvement.

The second thing to notice about the output files is that using the high profile with YUV 4:2:2 pixels results in a larger file as compared to baseline/main/high. This is expected since the baseline/main/high profiles use YUV 4:2:0 pixels that contain less information than 4:2:2 pixels would. See this on wikipedia for detailed information about YUV pixel formats. The last interesting thing about the output file sizes is that the high YUV 4:4:4 format actually compresses the best even though there is more information in each pixel than in the 4:2:2 and 4:2:0 formats.
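To make the size difference between these pixel formats concrete, the following small C program computes the raw storage needed for one 640 x 480 frame under each chroma subsampling scheme. This is a generic calculation for planar YUV at 8 bits per sample, not anything specific to x264 or to the files above.

#include <stdio.h>

int main(int argc, char **argv)
{
  const int w = 640;
  const int h = 480;
  int luma = w * h; // the Y plane is always full resolution

  // U and V plane sizes for each chroma subsampling scheme
  int chroma420 = 2 * (w / 2) * (h / 2); // half width, half height
  int chroma422 = 2 * (w / 2) * h;       // half width, full height
  int chroma444 = 2 * w * h;             // full resolution

  printf("4:2:0 -> %d bytes per raw frame\n", luma + chroma420); // 460800
  printf("4:2:2 -> %d bytes per raw frame\n", luma + chroma422); // 614400
  printf("4:4:4 -> %d bytes per raw frame\n", luma + chroma444); // 921600

  return 0;
}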

While file size is an important factor, the actual colors that appear in the output file can be a major issue as well. For certain applications, the exact colors that appear on screen can be very important. A developer can read more about Color Management at wikipedia. The basic issue with colors when encoding h.264 is taking care to ensure that the colors that appear on the iOS device screen closely match the colors the video or graphic artist intended. This topic is complex and difficult and will be ignored for the remainder of this post. In addition, no current iOS hardware seems to support playback of h.264 content with YUV 4:2:2 or 4:4:4 pixels.

Source Video:

A test video consisting of a single image at 640 x 480 was created as an example. The video displays this one animated frame for 2 seconds. This video content was selected because it is animated content with large regions that are either exactly the same color or colors that are very close to each other.


Note that these links are disabled; click on the updated page link at the top of this page to get to the updated content with working video links.

The original 640×480 lossless Quicktime movie encoded with the Apple Animation codec can be downloaded here:

  • Zipped (163 Kb)

The h.264 encoded versions:

  • HomerSanta_baseline.m4v
  • HomerSanta_main.m4v
  • HomerSanta_high.m4v
  • HomerSanta_high422.m4v
  • HomerSanta_high444.m4v

While h.264 encoding is very good, the image data is degraded a bit by the encoding process. Edges are blurred; this is hard to see, so here is a zoomed-in animation that shows the lines on Homer’s hat. The “Lossless” image is the original PNG and the “Baseline” image shows the same region after encoding with the baseline profile.


Encoding h.264 with x264:

These videos were encoded with x264 via ffmpeg. While there are other encoders around, the combination of ffmpeg+x264 is the best available. It produces the smallest files with the best quality of those I have tested. In addition, both ffmpeg and x264 are free software. But, it can be very difficult to find the correct command line arguments to actually create h.264 videos. The following ffmpeg command lines were used to create the h.264 encoded videos; the source Animation codec movie is referred to here as HomerSanta.mov. You must use a recent version of ffmpeg and x264; old versions may not work.

ffmpeg -y -i HomerSanta.mov -c:v libx264 \
  -pix_fmt yuv420p -preset:v slow \
  -profile:v baseline -tune:v animation -crf 23 \
  HomerSanta_baseline.m4v

ffmpeg -y -i HomerSanta.mov -c:v libx264 \
  -pix_fmt yuv420p -preset:v slow \
  -profile:v main -tune:v animation -crf 23 \
  HomerSanta_main.m4v

ffmpeg -y -i HomerSanta.mov -c:v libx264 \
  -pix_fmt yuv420p -preset:v slow \
  -profile:v high -tune:v animation -crf 23 \
  HomerSanta_high.m4v

ffmpeg -y -i HomerSanta.mov -c:v libx264 \
  -pix_fmt yuv422p -preset:v slow \
  -profile:v high422 -tune:v animation -crf 23 \
  HomerSanta_high422.m4v

ffmpeg -y -i HomerSanta.mov -c:v libx264 \
  -pix_fmt yuv444p -preset:v slow \
  -profile:v high444 -tune:v animation -crf 23 \
  HomerSanta_high444.m4v

iOS hardware playback

The baseline h.264 video will play on old iPhone 3 devices as well as newer iPhone 4/5 models and all iPads.

The main profile video should play on iPhone 4/5 models and on all iPad devices.

The high profile video should play on iPhone 4S/5 models and on iPad 2 and newer models.

The high422 and high444 videos do not currently play on any known iOS hardware. Some applications that have strict color requirements could benefit from support for these h.264 profiles, but until iOS hardware supports decoding these h.264 formats it is a moot point. For apps with specific color needs, lossless video support from a library like AVAnimator will be required.

Given that the main profile provides a file size advantage and works on all iPhone and iPad devices except for the very old iPhone 3 and 3G models, a developer should consider using the main profile for video content that will be embedded inside an iPhone app. An iPad app should use main as opposed to baseline. If the high profile offers an advantage in terms of video quality or file size then it could be considered for a new iPad app since only the iPad 1 would be excluded when encoding to the high profile.

Seamless video looping on iOS

A difficult problem that developers often run into with iOS is how to loop videos. Using AVPlayer does not work. Well, it sort of works, but the results you end up with do not look professional. Under iOS, hardware is used to render h.264 streams and that works well for most videos. But, short videos do not work well because of lag when starting the video. A developer will run into frustrating glitches attempting to loop a video and to switch from one short video to another. AVPlayer just was not designed for this.


Be aware that this blog page is now out of date; click on seamless_video_looping_on_ios to access the updated blog page.

Here are a couple of stackoverflow examples of this sort of question:


Since this kind of animation problem comes up over and over again, this post will provide a concrete example of seamless looping using the AVAnimator library. A lot of time was spent on this library specifically to solve the problem of looping video clips and starting playback at exactly the right moment. In this example, these three video clips will be combined into a very simple demo Xcode project.





As you can guess from the third animation, a bug will get zapped. The implementation will switch from the plain bug animation cycle to the zap loop at certain times. I could have implemented things with a more complex zap overlay layer that sits over the existing bug animation, but this example will just keep things as simple as possible and use 2 bug animations. Both bug animations will be 32BPP videos with an alpha channel.

The background radar animation loop is an animated GIF found online. The GIF can be attached to the Xcode project file as a normal file and decoded with existing GIF support in AVAnimator. In addition to the 2 bug animation videos, I also added a CoreAnimation scale CABasicAnimation to make the bug jump over the radar line. The results look like this:




Of course, the animations look a lot better when run on the simulator or on a device. This example is configured as either an iPhone or iPad project and the Xcode project can be downloaded at Source Code.

All of the animation logic can be found in the ViewController.m file. The key to understanding why this implementation is able to seamlessly loop the radar animation and quickly switch between bug animations is that AVAnimator decodes animation data and stores the video pixels in an optimal format on disk. When switching between the bug walk cycle animation and the zap cycle animation the library need only read from the start of the cached video data in a file. This means that a developer can switch from any video to any other video very quickly. This example only switches between two videos, but it would work equally well if there were 10 or 50 videos to switch between. Seamless looping of the background radar video works the same way: simply begin reading from the start of the same cached video file at the end of the loop. All of this logic is already implemented by AVAnimator; see the source code for example code showing how easy it is to set up animations and callbacks. Of course, AVAnimator does all this while consuming minimal memory.
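To illustrate why looping and switching is cheap once frames have been decoded to a file, here is a minimal C sketch. It assumes a hypothetical cache file that simply stores raw frames back to back, which is not the actual .mvid layout AVAnimator uses; the point is only that looping reduces to modulo arithmetic over a memory mapped file.

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
  uint8_t *base;      // start of the mapped cache file
  size_t frameSize;   // bytes per decoded frame
  uint32_t numFrames; // total frames in the clip
} FrameCache;

// Map a cache file of already decoded frames into memory.
int frame_cache_open(FrameCache *fc, const char *path,
                     size_t frameSize, uint32_t numFrames)
{
  int fd = open(path, O_RDONLY);
  if (fd < 0) return -1;
  size_t len = frameSize * numFrames;
  void *ptr = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
  close(fd); // the mapping remains valid after the fd is closed
  if (ptr == MAP_FAILED) return -1;
  fc->base = (uint8_t *) ptr;
  fc->frameSize = frameSize;
  fc->numFrames = numFrames;
  return 0;
}

// Looping is just wrapping the frame index; the kernel pages frame
// data in on demand, so only the frames actually shown use real memory.
const uint8_t* frame_cache_frame(const FrameCache *fc, uint32_t frameIndex)
{
  return fc->base + (size_t)(frameIndex % fc->numFrames) * fc->frameSize;
}

Switching from the bug walk cycle to the zap cycle amounts to nothing more than reading frames out of a different cache, which is why the switch can happen at any frame boundary without a visible glitch.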

This example does not include sound. If you are interested in an example that also includes sound clips, see this StreetFighter 2 example project.

Load OpenGL textures with alpha channel on iOS

Hello OpenGL Hackers

So, you have been using OpenGL and you know the difference between a Viewport and a Frustum. Great! But now consider that you have run into a problem that is not so easy to solve. How does one send multiple textures from a movie to OpenGL? That is a hard enough problem, but now make it even harder by adding a requirement that the textures include an alpha channel. The texture could be anything, but for the purposes of this example a 64×64 goldfish animation like this will do just fine.



Be aware that this blog page is now out of date; click on load_opengl_textures_with_alpha_channel_on_ios to access the updated blog page.


This animation contains 20 frames showing a goldfish swimming. This post will show how to include the animation in an iOS project as a source of OpenGL textures. A texture is basically the same as a 2D image except that the texture gets mapped into 3D space by OpenGL. Instead of starting from scratch, let’s use the code from OpenGL ES 2.0 for iPhone Tutorial Part 2 by Ray Wenderlich. The existing code displays a still image of a fish on the side of a spinning cube. The existing code is a great little demo and it will be even more interesting once the swimming fish shown above is added to the project.



Now for the implementation. The first thing one might think of is simply including a series of PNG images in the iOS project. It is not so hard to do, but this simple approach wastes a lot of space. If each PNG image in this animation is stored in a zip file, that file is 198118 bytes or 198Kb. That is not huge, but it is not hard to do a lot better.

With the AVAnimator library for iOS, the total size of the animation can be compressed down to 121965 bytes or 121Kb. The space savings is possible because the image data can be compressed with 7zip, which is more effective than the zlib compression used inside plain PNG images, and AVAnimator includes code to decompress 7zip data at runtime. The reduction is not quite 50%, but it is a significant reduction in file size and that means the final app will download more quickly for the end user.

In addition to app size, AVAnimator is able to decompress multiple images much more efficiently than would be possible when decompressing a series of PNG files. In this example, only a single movie with 20 frames will be decompressed, so CPU time used on the device will not be critical. But, if a developer wanted to decode 2, 4, or 8 videos at the same time then execution time on the iOS device would become a real issue.

Okay okay, enough talk. Let's see some results!



The image above is a screenshot from an iPhone running the demo with the addition of the goldfish animation. Of course, you will need to actually download the source code and run it yourself to see how nice the goldfish texture looks animating on the side of the cube.

The most interesting code is in OpenGLView.m; see the method named “render”, which is the CADisplayLink callback that is invoked once for each rendered frame. The very first display link call cannot actually render the fish, since the media still needs to be decoded and prepared to render. Once the fish animation is loaded, it will be pushed into OpenGL via the following code in the render method:

if (self->_frameDecoder) {
  // Texture frames are ready to display now
  [self loadNextGoldfishTexture];
  glBindTexture(GL_TEXTURE_2D, _fishTexture);
  glUniform1i(_textureUniform, 0);

See the implementation of the loadNextGoldfishTexture method for all the details of how to extract texture frames from the animation file. The most interesting pieces of code in loadNextGoldfishTexture are:

if (self->_goldfishFrame > 0) {
  // Deallocate the texture that held the previous frame
  GLuint texName = self->_fishTexture;
  glDeleteTextures(1, &texName);
}

AVFrame *frame = [_frameDecoder advanceToFrame:boundedOffset];
uint32_t *pixels = (uint32_t*) cgFramebuffer.pixels;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

Each time a fish animation frame is loaded into a texture on the graphics card, the previous one needs to be deallocated. The code then advances to the next frame and gets a pointer to the first word in the next framebuffer. This pointer “pixels” is then passed to glTexImage2D() and that API will copy the framebuffer to graphics memory. Note the use of GL_BGRA_EXT: this is an Apple-specific extension that makes uploading little endian BGRA texture data more efficient.
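For reference, here is a self contained sketch of the complete upload sequence written in plain C against the OpenGL ES 2 API. It assumes a BGRA framebuffer and the Apple BGRA texture extension available on iOS devices; the function and variable names here are illustrative and are not taken from the demo project.

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>
#include <stdint.h>

// Upload one BGRA framebuffer as a brand new texture, deleting the
// texture that held the previous frame. Returns the new texture name.
GLuint upload_bgra_texture(GLuint previousTexture, int width, int height,
                           const uint32_t *pixels)
{
  if (previousTexture != 0) {
    glDeleteTextures(1, &previousTexture);
  }

  GLuint texName;
  glGenTextures(1, &texName);
  glBindTexture(GL_TEXTURE_2D, texName);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

  // GL_BGRA_EXT matches the little endian BGRA layout of the framebuffer,
  // so the driver does not need to swizzle each pixel during the upload.
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
               GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

  return texName;
}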

Many thanks go to Ray Wenderlich for providing such a nice compact OpenGL ES 2 demo. The fish animation comes from a couple of codeproject demos A-lovely-goldfish and Fishy-Fishy-Fish.

New animated GIF decoder for AVAnimator

The AVAnimator library for iOS makes it easy to implement non-trivial video in an iPhone or iPad app. Supported video formats include APNG, MOV (h.264), and the custom MVID file format built specifically for iOS. But, one older format that is not supported directly is an animated GIF, like this one:



Be aware that this blog page is now out of date; click on new_animated_gif_decoder_for_avanimator to access the updated blog page.

The animated GIF format is quite limited in that it supports only a palette of 256 colors and 1-bit transparency. But, quite a few animated GIFs can be found online and a developer may want to include this type of media in an iOS app. For example, in a forum type of application the user could upload an avatar as an animated GIF.

Support for animated GIFs has now been added to AVAnimator via a new loader named AVGIF89A2MvidResourceLoader. The code will be included in the next release and is currently available on github:

A motivated developer could grab the new loader .h and .m files and include them in an existing project that already includes AVAnimator.

The loader module will read a GIF 89a file using the ImageIO Framework provided by iOS. A previous post covers memory issues under iOS and the mistakes made in the other iOS GIF decoders found online. This GIF decoder implementation will not use up all app memory and crash your app when presented with a large GIF.

The loader creates a secondary thread to decode a GIF from the app resources or from a regular file. The loader object will write a new .mvid file that contains either 24BPP or 32BPP pixels, depending on whether a transparent pixel was used in the GIF.
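The 24BPP vs 32BPP decision comes down to whether any decoded pixel is less than fully opaque. A minimal C sketch of that kind of check over a decoded BGRA frame might look like this (illustrative only, not the loader's actual code):

#include <stdbool.h>
#include <stdint.h>

// Return true if any pixel in a decoded BGRA frame is not fully opaque.
// If no frame in the GIF reports true, 24BPP output is sufficient;
// otherwise the output file needs 32BPP pixels with an alpha channel.
bool frame_has_transparency(const uint32_t *pixels, int numPixels)
{
  for (int i = 0; i < numPixels; i++) {
    uint32_t alpha = (pixels[i] >> 24) & 0xFF;
    if (alpha != 0xFF) {
      return true;
    }
  }
  return false;
}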

The Objective-C code needed to create this type of loader works the same way as any other AVAnimator media loader object:

// Create loader that will load .mvid from .gif attached as resource

NSString *resFilename = @"superwalk.gif";
NSString *tmpFilename = @"superwalk.mvid";

AVGIF89A2MvidResourceLoader *resLoader =
  [AVGIF89A2MvidResourceLoader aVGIF89A2MvidResourceLoader];

NSString *tmpPath = [AVFileUtil getTmpDirPath:tmpFilename];
resLoader.movieFilename = resFilename;
resLoader.outPath = tmpPath;

// Create Media object

AVAnimatorMedia *media = [AVAnimatorMedia aVAnimatorMedia];

media.resourceLoader = resLoader;
media.frameDecoder = [AVMvidFrameDecoder aVMvidFrameDecoder];


That is all there is to it. With this new loader module, an animated GIF can now be used as a source of video content in AVAnimator. The high performance video blit and advanced memory management logic in AVAnimator will be used with video content defined as an animated GIF.

Video and Memory usage on iOS devices


Today’s post is all about video and memory usage on iOS devices. In previous posts, colors and the way a color is represented as a pixel were covered. This post will focus on how video is represented in memory and how much memory is required to hold all the data contained in a video. This is an important detail that a developer must understand when considering possible implementations. Video takes up a LOT of memory, so much in fact that it can be a little hard to believe at first. The next example will make memory usage a little more clear by providing actual file sizes in bytes.



Be aware that this blog page is now out of date; click on video_and_memory_usage_on_ios_devices to access the updated blog page.

The animated GIF above is a web friendly version of the original video with dimensions 480 x 320 at 24 bits per pixel. These dimensions match the screen size of the original iPhone display in landscape orientation. This video is a small clip made up of 41 frames or images shown in a loop. The way video works is that one image or frame after another is displayed on the screen. As long as the frames are displayed quickly, the video looks like smooth movement instead of a series of images. If the video was viewed instead as a series of images on a filmstrip, it might look like this:


A film projector displays a filmstrip by shining light through each frame at a certain framerate. Digital video is not too conceptually different, except that each frame is contained in a file and the frames of video are displayed one after another on the screen. In digital video terms, to blit a video frame is to display it on screen at exactly the right moment so that the viewer sees smooth motion instead of a series of frames.

Conceptually, this all sounds pretty easy. It is only when a developer sits down to write code to implement this type of video playback that all the problems start to become clear. The first problem is the shocking amount of memory that uncompressed video takes up.

In the example above, the video clip has dimensions 480 x 320 and each pixel is stored as 24 bits per pixel. The video is displayed at 15 FPS (frames per second) so the whole series of 41 frames is displayed for about 3 seconds. A quick calculation shows how many bytes that is when each pixel is represented by a 32 bit word.

32 bits -> 4 bytes per pixel
480 * 320 -> 153600 pixels
153600 * 4 -> 614400 bytes per frame
614400 * 41 -> 25190400 bytes for 41 frames

So, this quick calculation shows that each frame of video takes up about 600 kB, or a bit more than 1/2 a megabyte. When all the frames are considered together, a file that contains all the video frames as raw pixels would be roughly 25 megabytes. That is a really really big file and this is a very simple animation that is only 3 seconds long. A clip that is 10 seconds long would require something like 75 megabytes of memory. Yikes!
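The same arithmetic, written out as a small C program so the numbers can be recomputed for other clip sizes and durations:

#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
  const uint64_t width = 480;
  const uint64_t height = 320;
  const uint64_t bytesPerPixel = 4; // each pixel held in a 32 bit word
  const uint64_t numFrames = 41;

  uint64_t bytesPerFrame = width * height * bytesPerPixel;
  uint64_t totalBytes = bytesPerFrame * numFrames;

  printf("bytes per frame : %llu\n", (unsigned long long) bytesPerFrame);
  printf("total bytes     : %llu (about %.0f megabytes)\n",
         (unsigned long long) totalBytes, totalBytes / 1000000.0);
  return 0;
}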

The astute reader will no doubt wonder how video compression plays into all this. After all, video can be compressed down to a much smaller size by doing frame to frame deltas and other types of data compression. Yes, it is true that the size of the file written to disk can be reduced by various compression methods, but the focus of this post is memory usage of the uncompressed video. Once compressed video has been uncompressed into app memory, it takes up the full uncompressed amount of memory. So, to keep things simple, video compression can be ignored when looking at the memory required to hold the uncompressed video data.

To understand how a developer ends up running into this type of problem in iOS, have a look at these two stackoverflow posts:

The basic problem illustrated by the above posts is that the developer lacks an understanding of exactly how much memory the uncompressed video is going to take up at runtime on the actual device. As a result, code might seem to work for very small examples or in the Simulator. But, once poor code is put into practice the result can be app crashes when run on the device. Under iOS, the system will notice when an app is taking up too much memory and automatically kill it in certain cases.

So, the first thing to do is make sure that any code dealing with video does not load all the video frames into memory at the same time. For this reason, developers should simply avoid ever using the UIImageView animationImages API under iOS. Any code that allocates a UIImage or CGImageRef for each frame of a video will end up crashing on the device in certain cases. For example, a video with large dimensions or a longer duration will crash while a smaller video might not.

While it may seem obvious that a developer should not use up all app memory on an embedded device like iOS, many developers simply do not understand this basic memory usage issue when dealing with images and video. For example, here are a few projects that can be found online that contain this most basic design flaw (all frames being loaded into memory at once):

It is tempting to look around online for existing code and copy and paste what seem to be easy solutions. But this type of Cargo cult programming has serious implications. When dealing with complex topics like audio and video, it is better to use an existing solution that already deals with the complexity.

Fully solving this memory usage problem was the initial reason I created AVAnimator. The AVAnimator library makes use of memory mapped files to implement a highly efficient means of loading into memory just those video frames that are actually being used at any one time. AVAnimator also includes an exceptionally fast blit implemented in ARM asm that makes it possible to load and apply frame deltas in the most efficient way possible under iOS. A developer might also be interested in this simplified PNG animation example code if AVAnimator seems too complex at first.

h.264 video with an alpha channel


Today’s post is about h.264 video and how an iOS developer can incorporate h.264 video with an alpha channel using AVAnimator. In short, h.264 is a lossy way to encode video; see this wikipedia page for detailed info. With h.264, some very impressive file size reduction is possible. But, not everything about h.264 is perfect. One major problem with h.264 is the lack of alpha channel support.


Be aware that this blog page is now out of date; click on h_264_video_with_an_alpha_channel to access the updated blog page.

Before going into more detail, it is important to clear up misinformation that one frequently comes across online. First, some people just do not seem to understand what an alpha channel is, as seen in some of the responses to this stackoverflow question. A previous post about RGBA pixels shows visual examples of what an alpha channel is and how it is implemented. Second, there is information floating around about how h.264 could implement an alpha channel (via the FRExt extensions), but this is not actually useful because no current encoder/decoder supports an alpha channel. Third, there is no other video format available by default under iOS that supports an alpha channel. What is available under iOS is a hardware based h.264 encoder/decoder that supports opaque video without an alpha channel.

What is presented here is an approach that makes use of the existing hardware decoder on iOS devices while also reducing file sizes as much as possible without unacceptable loss of quality. The developer will need to determine how much loss of quality is reasonable given the space savings associated with a specific compression setting.

First, the final result will be shown and then the elements that make up the solution are explained one at a time. The Kitty Boom example app works as either an iPhone or iPad app. The example app shows a simple background beach image with an animated image (originally an animated GIF) of Hello Kitty skipping down the beach. After a few steps, the adorable little kitty steps on a land mine and is blown to bits.


The Hello Kitty animation loop comes from the following animated GIF:


The really interesting part of this example is the video of the explosion.


One can easily find this sort of stock explosion footage shot against a green screen online. Where things get interesting is when considering the file sizes. The Kitty GIF image is 19K. The explosion movie encoded as lossless 30 FPS video is about 28 megs uncompressed or about 8.5 megs once compressed with 7zip. Including an 8.5 meg video for this explosion is just not a viable option.

Asking users to download an iOS app that contains videos that are 8.5 megs each is just asking for lost app store sales. To save space and end user download time, the explosion video can be converted to a pair of h.264 videos and then both videos can be compressed down to a reasonable size. A command line script is provided with AVAnimator to implement the channel split logic. One video will contain the RGB components while a second video will contain a black and white representation of the alpha channel, as shown here:


With AVAnimator, this conversion process is implemented as a command line script. Assuming a video had previously been exported to a series of PNG images, one would first encode the images to an MVID file at 30 frames per second like so:


$ mvidmoviemaker Explosion0001.png Explosion.mvid -fps 30
writing 152 frames to Explosion.mvid
MVID:               Explosion.mvid
Version:            1
Width:              640
Height:             480
BitsPerPixel:       32
ColorSpace:         sRGB
Duration:           5.0667s
FrameDuration:      0.0333s
FPS:                30.0000
Frames:             152
AllKeyFrames:       FALSE

Now the MVID file can be split into RGB and ALPHA components and encoded using the script. This script invokes ffmpeg and x264 to implement encoding of the h.264 video (both executables are provided with the AVAnimator utils download). By default, the CRF encoding would be set to 23 but a specific value can be passed as the third argument to the script. See x264EncodingGuide for more info, but generally experience shows that values in the range 20 to 35 are useful. The higher the CRF value, the more the video data is compressed in a lossy fashion.

$ Explosion.mvid 30
Split Explosion.mvid RGB+A as Explosion_rgb.mvid and Explosion_alpha.mvid
Wrote Explosion_rgb.mvid
Wrote Explosion_alpha.mvid

After all the encoding steps have been executed, there is a new directory named MVID_ENCODE_CRF_30. This directory contains all the generated files. The ones of interest are the .m4v files, in this case Explosion_rgb_CRF_30_24BPP.m4v and Explosion_alpha_CRF_30_24BPP.m4v.

$ ls -la *.m4v
-rw-r--r--  1  79047 16:18 Explosion_alpha_CRF_30_24BPP.m4v
-rw-r--r--  1  72778 16:18 Explosion_rgb_CRF_30_24BPP.m4v

The output shows some very impressive compression results. Instead of a 28 meg or 8.5 meg video, this process results in a pair of videos that together take up only about 150 kilobytes of disk space. This is a significant space savings. Download the Kitty Boom example Xcode project to see the source code needed to load these two videos on an iOS device.

That is really all there is to it. The tricky code needed to read the h.264 videos at runtime and combine them back together on an iPhone or iPad device is included in the AVAnimator library. The most complex aspects of conversion and encoding with the x264 encoder are all handled on the desktop.
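As a rough illustration of what the runtime recombination amounts to, the following C sketch merges one pixel from the RGB video with the matching pixel from the alpha video into a single premultiplied BGRA word. This is only a conceptual per pixel sketch under assumed pixel layouts; AVAnimator's actual implementation is heavily optimized and operates on whole frames.

#include <stdint.h>

// Combine a pixel from the RGB video with the matching pixel from the
// black and white alpha video. The alpha video is grayscale, so any one
// of its color channels can be read back as the alpha value.
uint32_t combine_rgb_and_alpha(uint32_t rgbPixel, uint32_t alphaPixel)
{
  uint32_t alpha = alphaPixel & 0xFF;
  uint32_t red   = (rgbPixel >> 16) & 0xFF;
  uint32_t green = (rgbPixel >> 8) & 0xFF;
  uint32_t blue  = rgbPixel & 0xFF;

  // Premultiply the color components so the result is ready for
  // CoreGraphics, which expects premultiplied alpha (see the next post).
  red   = (red * alpha + 127) / 255;
  green = (green * alpha + 127) / 255;
  blue  = (blue * alpha + 127) / 255;

  return (alpha << 24) | (red << 16) | (green << 8) | blue;
}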

Pixel binary layout w premultiplied alpha


In previous posts, RGB pixels and RGBA pixels were covered. In this post, premultiplication of the RGB components will be covered.


Be aware that this blog page is now out of date; click on pixel_binary_layout_w_premultiplied_alpha to access the updated blog page.

A premultiplied pixel has an algorithmic advantage as compared to a non-premultiplied pixel. Specifically, Alpha Compositing, the method of combining a foreground image with a background, becomes significantly simpler when the input pixels are already premultiplied. For a detailed description of the math involved, see this article or this wikipedia page. This post assumes premultiplication is required because the CoreGraphics layer in iOS and MacOSX supports only premultiplied alpha components. On iOS, the native endian is little endian and the preferred pixel layout is known as BGRA.
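To see why, here is the “over” compositing step written as a small C function for premultiplied 32 bit BGRA pixels with components in the 0 to 255 range. Because the foreground components have already been multiplied by their alpha, each output component is just src + dst * (1 - srcAlpha), with no extra per component multiply by the source alpha.

#include <stdint.h>

// Composite a premultiplied foreground pixel over a premultiplied
// background pixel: out = fg + bg * (1 - fgAlpha). The same formula
// applies to the alpha component itself.
uint32_t composite_over_premultiplied(uint32_t fg, uint32_t bg)
{
  uint32_t fgAlpha = (fg >> 24) & 0xFF;
  uint32_t oneMinusAlpha = 255 - fgAlpha;
  uint32_t out = 0;

  for (int shift = 0; shift <= 24; shift += 8) {
    uint32_t f = (fg >> shift) & 0xFF;
    uint32_t b = (bg >> shift) & 0xFF;
    uint32_t c = f + ((b * oneMinusAlpha + 127) / 255);
    out |= (c & 0xFF) << shift;
  }
  return out;
}

With non-premultiplied inputs, each foreground component would first have to be multiplied by the foreground alpha, which is exactly the work that premultiplication moves to conversion time.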

The following source code shows how one could implement conversion of plain RGBA values to premultiplied RGBA values. The pixels will be stored as a 32 bit unsigned integer. When a pixel is fully opaque, the alpha channel is 0xFF and the RGB values remain the same. When a pixel is fully transparent, the premultiplication results in the value zero for each component. The pixel values get more interesting when the alpha component is in the range 1 to 254. Converting the 3 RGB channels means 3 floating point multiply operations and another to calculate (alpha / 255.0).

#include <stdint.h> // for uint32_t
#include <stdio.h> // for printf()

uint32_t rgba_to_pixel(uint8_t red, uint8_t green,
                       uint8_t blue, uint8_t alpha)
{
  if (alpha == 0) {
    // Any pixel that is fully transparent can be represented by zero
    return 0;
  } else if (alpha == 0xFF) {
    // Any pixel that is fully opaque need not be multiplied by 1.0
  } else {
    float alphaf = alpha / 255.0;
    red = (int) (red * alphaf + 0.5);
    green = (int) (green * alphaf + 0.5);
    blue = (int) (blue * alphaf + 0.5);
  }

  return (alpha << 24) | (red << 16) | (green << 8) | blue;
}

void print_pixel_rgba(char *desc, uint32_t pixel)
{
  uint32_t alpha = (pixel >> 24) & 0xFF;
  uint32_t red = (pixel >> 16) & 0xFF;
  uint32_t green = (pixel >> 8) & 0xFF;
  uint32_t blue = (pixel >> 0) & 0xFF;

  printf("%10s pixel 0x%.8X : (R G B A) (%d, %d, %d, %d)\n",
         desc, pixel, red, green, blue, alpha);
}

int main(int argc, char **argv)
{
  uint32_t red = rgba_to_pixel(255, 0, 0, 255);
  print_pixel_rgba("red", red);

  uint32_t green = rgba_to_pixel(0, 255, 0, 127);
  print_pixel_rgba("50% green", green);

  uint32_t black = rgba_to_pixel(0, 0, 0, 127);
  print_pixel_rgba("50% black", black);

  uint32_t white_transparent = rgba_to_pixel(255, 255, 255, 0);
  print_pixel_rgba("0% white", white_transparent);

  uint32_t white = rgba_to_pixel(255, 255, 255, 191);
  print_pixel_rgba("75% white", white);

  return 0;
}
Compile the source with gcc like so:

$ gcc -o encode_decode_pixels_prergba encode_decode_pixels_prergba.c

$ ./encode_decode_pixels_prergba
       red pixel 0xFFFF0000 : (R G B A) (255, 0, 0, 255)
 50% green pixel 0x7F007F00 : (R G B A) (0, 127, 0, 127)
 50% black pixel 0x7F000000 : (R G B A) (0, 0, 0, 127)
  0% white pixel 0x00000000 : (R G B A) (0, 0, 0, 0)
 75% white pixel 0xBFBFBFBF : (R G B A) (191, 191, 191, 191)

Note how the green pixel (0, 255, 0, 127) becomes (0, 127, 0, 127) since the green component is equal to (255 * (127/255)). Note also that when the alpha component is zero all three color components will be zero, as seen in the 0% white pixel example. A developer would need to implement this type of conversion when creating code that reads pixels from an image file.

In actual code, doing 3 floating point multiply operations for every pixel would be far too slow. A faster implementation could avoid the multiply and divide operations entirely by precomputing a lookup table, so that converting a pixel requires only 3 table lookups, as sketched below.
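Here is one way such a table could be built and used; this is a sketch that matches the rounding behavior of the floating point version above.

#include <stdint.h>

// premultiply_table[alpha][component] holds round(component * alpha / 255).
// Building the table once replaces the per pixel floating point math
// with simple array lookups.
static uint8_t premultiply_table[256][256];

void init_premultiply_table(void)
{
  for (int alpha = 0; alpha < 256; alpha++) {
    for (int component = 0; component < 256; component++) {
      premultiply_table[alpha][component] =
        (uint8_t) ((component * alpha + 127) / 255);
    }
  }
}

uint32_t rgba_to_pixel_table(uint8_t red, uint8_t green,
                             uint8_t blue, uint8_t alpha)
{
  // 3 table lookups per pixel instead of 3 multiplies and a divide
  red = premultiply_table[alpha][red];
  green = premultiply_table[alpha][green];
  blue = premultiply_table[alpha][blue];

  return (alpha << 24) | (red << 16) | (green << 8) | blue;
}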

This post covered the logic needed to implement a manual conversion of a non-premultiplied pixel into a premultiplied pixel for use with a graphics layer that requires alpha components that are already premultiplied.