I am using the AVR microcontroller as a simple image processor in a project that I am working on, and I would like to know how to convert an image such as a .jpg to binary. My intention is to send the binary image to the microcontroller, buffer the image in an array, do some image processing on it, and then return it. If anyone has any suggestions it would be much appreciated.
Posted on 2013-02-12 17:50:08 by Snake4eva
JPG is a very complicated compression format. I guess your best bet is to look for a library that has already been adapted to the microcontroller you use.
Otherwise you may want to try your hand at adapting one of the existing open-source JPG libraries to run on your system.
Posted on 2013-02-13 04:08:36 by Scali
I don't do much graphics programming, but I seriously doubt you are going to be able to support JPEG on the 8-bit AVR. The biggest issue is going to be the lack of on-chip memory to actually do something as complex as JPEG decoding. You *could* use some of the digital ports to connect to external RAM, but then you're limited in speed.

If you are really serious about doing an image processor then you should either switch to BMP (or any other relatively easy to deal with format) or use a larger architecture (like ARM, AVR32 or PIC32).

Now if you actually meant the AVR32 rather than the AVR, then Atmel already has a JPEG library available that you can use.

http://asf.atmel.com/docs/latest/avr32.services.graphics.ijg.example.evk1105/html/index.html
Posted on 2013-02-13 15:00:35 by Synfire
Thanks for your help. There is something I would like to know about how images are represented in memory. I understand that an image is a function I(u,v), where (u,v) is the position within the image and I is the intensity or colour value of the pixel. What I don't understand is how it is encoded in, say, a JPEG or BMP file. I know that there are various colour depths, i.e. bits reserved for each of the RGB elements, but where does the position go? Can someone please explain the format in which an image is stored, covering both its intensity and its position? Also, can I do image processing on the AVR32 with my own user-defined processing functions, and how are matrices and floating-point numbers used in images/videos?
Posted on 2013-02-18 09:23:22 by Snake4eva
Generally you don't use separate intensity and colour values. The two most common ways to represent colours are to store RGB values directly in memory, or to use indexed pixels, where each pixel in memory is an index into a palette table that stores the RGB values.
Modern hardware nearly always uses RGB, either 16-bit or 32-bit, in a linear framebuffer.
For example, in 32-bit you have an 8-bit value each for R, G and B, and another spare 8 bits to align the data (which is sometimes used as an 8-bit alpha channel). So in memory your pixels will look like this:

xRGBxRGBxRGB... (each letter representing a byte)

The pixels are generally stored from left to right, and from top to bottom.
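As a rough sketch of what that means in code (names made up, assuming a 32-bit xRGB framebuffer where 'width' is the number of pixels per row):

#include <stdint.h>

/* Sketch only: a 32-bit xRGB linear framebuffer, pixels stored
   left to right, top to bottom. */
uint32_t read_pixel(const uint32_t *fb, int width, int x, int y)
{
    return fb[y * width + x];               /* row offset plus column */
}

void write_pixel(uint32_t *fb, int width, int x, int y,
                 uint8_t r, uint8_t g, uint8_t b)
{
    /* pack R, G and B into one 32-bit value; the top byte is the spare/alpha byte */
    fb[y * width + x] = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}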

In the case of indexed pixels, older hardware will often use bitplanes. Say you have a 4-bit palette; then you will have 4 bitplanes. Bitplane 0 stores bit 0 of each pixel, bitplane 1 stores bit 1 of each pixel, etc.
In each bitplane, each bit represents a single pixel.
The hardware combines the bitplanes into a 4-bit value, then uses it as an index into the palette to find the RGB value.
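Just to illustrate that combining step (a sketch only, assuming four separate plane buffers with the most significant bit of each byte being the leftmost pixel, as on most planar hardware):

#include <stdint.h>

/* Sketch only: rebuild the 4-bit palette index of pixel 'x' from four bitplanes. */
uint8_t pixel_index(const uint8_t *plane[4], int x)
{
    int byte = x >> 3;                  /* which byte of each plane holds this pixel */
    int bit  = 7 - (x & 7);             /* which bit within that byte */
    uint8_t index = 0;

    for (int p = 0; p < 4; p++)
        index |= ((plane[p][byte] >> bit) & 1) << p;   /* plane 0 gives bit 0, plane 1 gives bit 1, ... */

    return index;                       /* look this up in the palette to get RGB */
}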

In BMP, the pixel data is stored pretty much the same as it appears in memory (although there are variations with RLE compression and such).
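For example, a minimal (and deliberately unrobust) sketch of pulling the raw pixel bytes out of an uncompressed 24-bit BMP, using the standard header offsets; the file name is just an example:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Sketch only: no error checking, and it assumes an uncompressed,
   bottom-up, 24-bit BMP. */
static uint32_t le32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(void)
{
    FILE *f = fopen("test.bmp", "rb");
    uint8_t hdr[54];                             /* file header (14 bytes) + info header (40 bytes) */
    fread(hdr, 1, sizeof(hdr), f);

    uint32_t offset = le32(hdr + 10);            /* where the pixel data starts */
    int32_t  width  = (int32_t)le32(hdr + 18);
    int32_t  height = (int32_t)le32(hdr + 22);   /* positive height = stored bottom-up */

    int stride = ((width * 3) + 3) & ~3;         /* each row is padded to a multiple of 4 bytes */
    uint8_t *pixels = malloc((size_t)stride * height);

    fseek(f, (long)offset, SEEK_SET);
    fread(pixels, 1, (size_t)stride * height, f);
    fclose(f);

    /* pixel (x, y), with y = 0 at the top of the image:
       uint8_t *p = pixels + (height - 1 - y) * stride + x * 3;  p[0]=B, p[1]=G, p[2]=R */

    free(pixels);
    return 0;
}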
How JPG stores the pixel info is very complicated. It performs a number of transformations on the pixel data to make it compress better. First it converts from RGB to the YUV colourspace. Then it subsamples the chroma, typically 4:2:0 (so for every 4 pixels of Y info, only 1 sample of U and V is stored), and works on blocks of 8x8 pixels. For each block it performs a discrete cosine transform, quantizes the resulting coefficients, and then reorders them in a zig-zag pattern. This leads to data with a lot of zero coefficients. The non-zero coefficients are stored with Huffman compression, and the runs of zeros are stored with a run-length value.
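To give a flavour of the middle of that pipeline, here is a sketch of just the quantise-and-reorder step for one 8x8 block (the DCT and Huffman stages are omitted, and a real encoder rounds rather than truncates):

#include <stdint.h>

/* Sketch only: quantise an 8x8 block of DCT coefficients, then read them
   out along the anti-diagonals (the zig-zag pattern) so that the many
   zeros end up in long runs. */
void quantise_and_reorder(const int dct[64], const int qtable[64], int out[64])
{
    int quant[64];
    for (int i = 0; i < 64; i++)
        quant[i] = dct[i] / qtable[i];      /* small coefficients collapse to zero */

    int idx = 0;
    for (int d = 0; d < 15; d++) {          /* 15 anti-diagonals in an 8x8 block */
        int rmin = d < 8 ? 0 : d - 7;
        int rmax = d < 8 ? d : 7;
        if (d & 1)                          /* odd diagonals run top-right to bottom-left */
            for (int r = rmin; r <= rmax; r++) out[idx++] = quant[r * 8 + (d - r)];
        else                                /* even diagonals run bottom-left to top-right */
            for (int r = rmax; r >= rmin; r--) out[idx++] = quant[r * 8 + (d - r)];
    }
}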

So basically the resulting info in a JPG doesn't even remotely resemble any image data.

Can you do image processing on the AVR32? Well, it's Turing complete, so sure you can. Will it have enough processing power and memory to make it useful? I have no idea; it depends on what you plan to do.

Matrices and floating point are generally not used in images and video, except when you want high dynamic range or other fancy stuff (some parts of JPG/MPG compression and decompression involve matrix-related operations, but they can be performed with integer-only math).
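As an example of the integer-only approach, the usual trick is to scale the colour-conversion coefficients to fixed point; a sketch of YUV (YCbCr, as used in JPG) back to RGB, with the constants 1.402, 0.344136, 0.714136 and 1.772 pre-multiplied by 65536:

#include <stdint.h>

static uint8_t clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Sketch only: YCbCr -> RGB in 16.16 fixed point, no floating point needed.
   Assumes the compiler does an arithmetic right shift on negative values,
   which virtually all of them do. */
void yuv_to_rgb(int y, int u, int v, uint8_t *r, uint8_t *g, uint8_t *b)
{
    int cu = u - 128;
    int cv = v - 128;

    *r = clamp255(y + (( 91881 * cv) >> 16));
    *g = clamp255(y - (( 22554 * cu + 46802 * cv) >> 16));
    *b = clamp255(y + ((116130 * cu) >> 16));
}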
Posted on 2013-02-19 05:26:23 by Scali
Thanks Scali. What I want to do with the AVR32 is write an object processor that accepts image samples from an external image capture device and analyses them to detect a number of stored objects. The objects will be stored in an external DIP DRAM memory chip. Also, I was messing around with a .jpg image in the debugger, and I realized, as you mentioned, that there seems to be no direct correlation between the bytes stored in memory and the pixel values. What I would like to know is how image editing software like Adobe Photoshop is able to manipulate pixels, and is there some way to tell where the pixel data is, separately from the metadata? **Currently reading the JFIF standard**
Posted on 2013-02-19 17:47:12 by Snake4eva
How does an image viewer or editor work? This is how I think it works:
1. Read image file
2. Extract data based on image format
3. Do operations on the pixel data byte by byte
4. Interface between graphics card and display unit
5. Output to display

I would like to know the programming details, or at least an algorithmic outline, of how image editors work.
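For step 3, I imagine it boils down to something like this once the file has been decoded into a flat pixel buffer (an invert on an 8-bit greyscale buffer, just to keep the sketch simple; names made up):

#include <stdint.h>
#include <stddef.h>

/* Sketch only: once a decoder has produced a plain pixel buffer, an "edit"
   is just a loop over the pixels; the editor then re-encodes the buffer or
   hands it to the display code. */
void invert(uint8_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++)
        pixels[i] = 255 - pixels[i];
}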
Posted on 2013-02-19 18:49:52 by Snake4eva