Binary code serves as the fundamental language of computers, and developers use it everywhere—in the text you're reading and on the screen you’re looking at. It is an integral part of our digital lives. But how does a code made up of just 0s and 1s create an image? You're in the right place to find out!
At its core, binary consists of 0s and 1s. Each bit can be either a 0 or a 1, and a byte is made up of 8 bits. Because of this, there are 256 possible combinations of 0s and 1s in a byte, calculated by the formula 2^8. This raises the question: how can this sequence of numbers form an image that we can recognize? But first, let’s explore what an image actually is.
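If you want to see those numbers for yourself, here is a tiny JavaScript sketch (JavaScript simply because we'll come back to it later for ArrayBuffers) that counts the values a single byte can hold:

```javascript
// One byte = 8 bits, so it can hold 2^8 = 256 distinct values (0 through 255).
const bitsPerByte = 8;
console.log(2 ** bitsPerByte);          // 256

// The largest byte value, written out as binary digits:
console.log((255).toString(2));         // "11111111"
console.log(parseInt("11111111", 2));   // 255
```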
What is an image?
An image is just a grid of pixels. Your graphics card decides how many bits describe each pixel using the bit-depth setting: a bit depth of 8 means that 256 different values can be represented in that pixel, and so on.
Grayscale image example using 1 byte (256 color values)
This is an example of what's going on. As you can see, the picture is very unclear, and that's because it's just 12x16 pixels with a depth of 8. Here is another image to show you how different 8-bit and 16-bit can be... we're talking about a difference of over 65,000 extra colors (2^16 = 65,536 versus 2^8 = 256), mind-blowing isn't it? We'll continue with 8-bit depth just to keep it simple; you don't wanna see a 5-figure number on a pixel. As a reminder: 8-bit depth means that there is 1 byte in each pixel.
Contrast between 8-bit and 16-bit colors
That should be enough to fundamentally understand what an image consists of, but how is it presented?
How is the image presented?
Let us say you have an image with a resolution of 128x128, which is 128 pixels by another 128 pixels, a square. Your graphics processor will ask the image: how do I draw you? The image will respond with its bit depth and the bytes for each pixel in the 128x128 grid. You can think of it as a pixel grid.
What is image resolution:
Resolution is the number of pixels in an image. For example, an image of 128x128 has 16,384 pixels, meaning it has a fixed resolution, unlike a vector, which has no fixed resolution and stays sharp no matter how you scale it.
Here is where the color code comes in:
- RGB (Red, Green, Blue): Each pixel has values for red, green, and blue (e.g., 255, 0, 0 for red).
- Grayscale: since each pixel is a single byte, a grayscale image with the same resolution is one-third the size of an RGB image (see the small sketch after this list).
- RGBA: Same as RGB, but with an extra alpha value for transparency.
- Other formats like CMYK (Cyan, Magenta, Yellow, Key/Black), used in printing, or grayscale (for black-and-white images).
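To compare these encodings side by side, here is a rough sketch of how much raw pixel data each one needs for the same resolution (8 bits per channel assumed; real image files also add headers and compression):

```javascript
// Bytes needed per pixel for each encoding (assuming 8 bits per channel).
const bytesPerPixel = { grayscale: 1, rgb: 3, rgba: 4 };

const width = 128, height = 128; // example resolution
for (const [encoding, bpp] of Object.entries(bytesPerPixel)) {
  console.log(`${encoding}: ${width * height * bpp} bytes of raw pixel data`);
}
// grayscale: 16384 bytes, rgb: 49152 bytes, rgba: 65536 bytes
```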
Example of 1-byte colors
There are also image file types called bitmaps (raster graphics) that use bits to represent the colors, unlike vector formats like SVG and EPS, which use math:
JPEG (Joint Photographic Experts Group)
- Raster, compressed (lossy).
- Supports 24-bit RGB color (16.7 million colors).
- No transparency support.
- Good for photos with smooth color gradients.
PNG (Portable Network Graphics)
- Raster, compressed (lossless).
- Supports 8-bit grayscale or 24-bit RGB.
- Transparency (alpha channel) supported in 32-bit PNG.
- Good for graphics, icons, and images needing transparency.
BMP (Bitmap)
- Raster, uncompressed or minimally compressed.
- Supports various bit depths: 1-bit, 8-bit, 24-bit.
- Large file sizes; rarely used now.
GIF (Graphics Interchange Format)
- Raster, compressed (lossless).
- 8-bit color palette (256 colors).
- Supports simple animations and transparency.
TIFF (Tagged Image File Format)
- Raster; supports multiple bit depths.
- Often used in high-quality printing and archiving.
- Lossless compression or uncompressed.
SVG (Scalable Vector Graphics)
- Vector format, not pixel-based.
- Can embed raster images (e.g., PNG, JPEG) if needed.
- Scalable without losing quality.
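As a side note, most of these raster formats announce themselves with a few "magic" bytes at the very start of the file, which is another place where the raw binary shows up. Here is a minimal sketch that peeks at those signatures (the function name sniffImageType is made up for this example; the byte signatures for PNG, JPEG, GIF, and BMP are the standard ones):

```javascript
// Guess an image format from the first bytes of its file data.
// PNG starts with 0x89 'P' 'N' 'G', JPEG with 0xFF 0xD8 0xFF,
// GIF with "GIF8", and BMP with "BM".
function sniffImageType(arrayBuffer) {
  const b = new Uint8Array(arrayBuffer);
  if (b[0] === 0x89 && b[1] === 0x50 && b[2] === 0x4e && b[3] === 0x47) return "png";
  if (b[0] === 0xff && b[1] === 0xd8 && b[2] === 0xff) return "jpeg";
  if (b[0] === 0x47 && b[1] === 0x49 && b[2] === 0x46 && b[3] === 0x38) return "gif";
  if (b[0] === 0x42 && b[1] === 0x4d) return "bmp";
  return "unknown";
}
```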
You might be wondering why we keep referring to only 256 values when RGB uses 3 bytes. In reality, RGB uses 1 byte per channel, totaling 24 bits, which allows for 2^24, or 16,777,216, possible colors.
Now, let’s recap: the GPU requests color details and dimensions from an image, and the image provides information such as 128x128. This means the image has a grid of 128 rows and 128 columns. Images store pixel data in a file and the GPU reads and processes this data. Each grid spot (or pixel) contains a color defined by 3 bytes.
To clarify this further, let's consider a much smaller example: a 2x2 image.
2 x 2 pixels data:
- row 1 column 1 (pixel 1): RGB(0,0,0)
- row 1 column 2 (pixel 2): RGB(255,0,0)
- row 2 column 1 (pixel 3): RGB(0,255,0)
- row 2 column 2 (pixel 4): RGB(0,0,255)
These particular RGB values are arbitrary.
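To make that concrete, here is how those four pixels could sit in memory as one flat array of bytes, three per pixel, row after row (a sketch of raw, uncompressed pixel data rather than any particular file format):

```javascript
// The 2x2 image above as raw RGB bytes, one row after another:
// pixel 1 = black, pixel 2 = red, pixel 3 = green, pixel 4 = blue.
const pixels = new Uint8Array([
    0,   0,   0,   // row 1, column 1
  255,   0,   0,   // row 1, column 2
    0, 255,   0,   // row 2, column 1
    0,   0, 255,   // row 2, column 2
]);

console.log(pixels.length); // 12 bytes = 2 x 2 pixels x 3 bytes each
```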
What is a bit-depth?
The bit depth, at a basic level, refers to the amount of data stored in each pixel. Essentially, the higher the bit depth, the larger the file size, but why is that the case? Let's take an image size of 1280x720 as an example, using a 32-bit depth.
So, what does 32-bit depth mean? At 32 bits, which corresponds to 4 bytes, we have a grid array of 1280x720 pixels. Each pixel consists of 32 bits, which can be interpreted as a sequence of zeros and ones. To calculate the total number of bits for this image, we multiply the width by the height and then by the bit depth:
1280 pixels x 720 pixels x 32 bits = 29,491,200 bits.
Since there are 8 bits in a byte, we convert this to bytes by dividing the total bits by 8. This results in:
29,491,200 bits ÷ 8 = 3,686,400 bytes, or approximately 3.69 MB.
Now, just imagine how different the file size would be if we had only used standard RGB or grayscale instead of RGBA.
It's worth noting that the 4 bytes we refer to typically correspond to RGBA (Red, Green, Blue, Alpha), with the Alpha channel representing opacity.
How to calculate image size
This oversimplifies what happens, but you get the idea; the RGB values aren't sent as decimal numbers, they are sent as binary. RGB(255,0,0) is stored as 11111111 00000000 00000000, which is a bright red color. And obviously, the larger the bit depth, the larger the file size, because bytes measure the size of everything on your computer (kilobytes, megabytes, terabytes). Since RGB uses 3 bytes, 1 pixel equals 3 bytes of data; with 128x128 pixels, the image size is 128 x 128 x 3 bytes = 49,152 bytes (48 KB), or 393,216 bits.
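Putting that calculation into one small helper makes it easy to try other resolutions and bit depths (the function name and return shape are just illustrative):

```javascript
// Raw (uncompressed) image size from resolution and bit depth.
function rawImageSize(width, height, bitDepth) {
  const bits = width * height * bitDepth;
  const bytes = bits / 8;
  return { bits, bytes, megabytes: bytes / 1_000_000 };
}

console.log(rawImageSize(128, 128, 24));   // 393,216 bits = 49,152 bytes
console.log(rawImageSize(1280, 720, 32));  // 29,491,200 bits ≈ 3.69 MB
```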
Another example of how pixels are affected by bit depth
In summary, an image is essentially a data file containing information about its type, resolution, and color representation (bit depth). This data is interpreted by the graphics processor, which determines how each pixel is displayed based on the stored color and resolution details. This seamless process transforms binary data into the visuals we see on our screens, showcasing the power of digital representation.
How the picture's metadata (dimensions/bit depth) are saved
This is how the image describes itself: dimensions, bit depth, and DPI.
Moreover, DPI (Dots Per Inch) refers to the number of printed dots contained within one inch of an image printed by a printer. PPI (Pixels Per Inch) refers to the number of pixels contained within one inch of an image displayed on a computer monitor.
What is PPI
PPI (Pixels Per Inch) plays a key role in how images are displayed on screens. For example, imagine an image with a resolution of 128x128 pixels displayed on a monitor with a resolution of 1920x1080 pixels. If each image pixel maps to exactly one screen pixel, the image will appear very small on the screen. However, when you zoom in on the image, it fills more of the screen.
So, how does this work despite the image's small dimensions? When you zoom in, the image is scaled up: each image pixel is stretched across several screen pixels, so the image covers more of the screen and appears larger.
To make the image more visible, the computer scales it, but this process can reduce the image's quality unless it is a vector graphic, which can be scaled without losing resolution. I hope this clears things up!
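A rough way to see the relationship in numbers: the physical size of an image on a given screen is just its pixel dimensions divided by the screen's PPI. This sketch ignores browser zoom and device pixel ratios, and the 96 PPI figure is only a typical desktop value:

```javascript
// Physical width of an image on screen, given the display's pixels per inch.
function displayedInches(imagePixels, screenPpi) {
  return imagePixels / screenPpi;
}

// A 128-pixel-wide image on a typical ~96 PPI desktop monitor:
console.log(displayedInches(128, 96));      // ≈ 1.33 inches wide
// The same image scaled up 10x covers ten times the physical width:
console.log(displayedInches(128 * 10, 96)); // ≈ 13.3 inches wide
```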
How does an ArrayBuffer represent an image (Javascript)?
An ArrayBuffer in JavaScript represents an image by storing the binary data of each pixel in the image. Essentially, an image is a grid of pixels, and each pixel's color is represented by bits. The size of this grid determines the resolution of the image.
When an image is saved to an ArrayBuffer, the buffer holds the binary data for every pixel. The more bit-depth (the number of bits used to represent each pixel), the more memory the ArrayBuffer requires. For instance, a higher bit-depth allows for a greater range of colors but also increases the file size.
The maximum size of an ArrayBuffer in many JavaScript engines is about 2 GB (2,147,483,647 bytes). That is enough to store roughly 86 uncompressed images at 4K resolution (3840 x 2160 pixels) with a 24-bit color depth.
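As a quick sketch of how you might actually get an image's bytes into an ArrayBuffer, you can fetch the file and then, if you want the decoded pixel grid rather than the compressed file data, draw it on a canvas and read back the RGBA bytes (this assumes a browser environment and a placeholder URL):

```javascript
// Run inside an async function or a module where top-level await is allowed.
// Fetch an image file as raw binary (the encoded PNG/JPEG bytes):
const fileBuffer = await fetch("example.png").then(res => res.arrayBuffer());
console.log(fileBuffer.byteLength, "bytes of encoded file data");

// Decode it and read the actual pixel grid as RGBA bytes (4 per pixel):
const bitmap = await createImageBitmap(new Blob([fileBuffer]));
const canvas = document.createElement("canvas");
canvas.width = bitmap.width;
canvas.height = bitmap.height;
const ctx = canvas.getContext("2d");
ctx.drawImage(bitmap, 0, 0);
const { data } = ctx.getImageData(0, 0, bitmap.width, bitmap.height);
console.log(data.length, "bytes =", bitmap.width * bitmap.height, "pixels x 4");
```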
Thank you for reading.