Sep 10

Displaying Animations in OpenGL on iOS using Bink and Shaders

I’ve discussed the issues with fluid 2D animation in an iOS game before. Memory, load time, bundle size, and hardware limitations all contribute to making traditional sprite sheet animation on iOS difficult. We were able to just get by with it in the early days of MinoMonsters. Then the evolutions happened.

They got bigger

Suddenly we needed more space for our monsters. With the hard limits presented by sprite sheet animation, we couldn’t fit more than half of an evolution animation on a full size texture sheet. The device simply could not bear the memory load required for monsters of this size. A different approach was needed.

One Word: Video

Each animation takes up too much memory. We hold every frame of the animation in memory for its entire duration, even though only one frame is visible at a time. The contents of a single frame make up only about 3% of the total texture sheet area. That is 97% wasted memory. How can we cut down on this waste?

Video.

There has been a lot of engineering effort invested in making computers capable of playing video. What does video actually mean from a software perspective? Video playback pipelines work something like this (my understanding of it at least, I’m probably wrong):

Video Pipeline

This process occurs each frame: the next frame is pulled from the video file and decompressed, which often involves applying diffs from the video file to the previous frame. The video data is then a set of several images that represent the different channels of the final frame (e.g. RGB). The color planes are then recombined, using either built-in hardware or software, and the resulting image is sent to the display. This process repeats at the framerate of the video, producing a smooth image. The advantage of a system like this is that each frame is pulled from the video file on demand. The only memory required for video decompression is a few buffers to hold the color planes and a destination buffer for the final composite. More on this later.
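In rough pseudocode, the per-frame loop might look something like this (the function names here are hypothetical, just to make the shape of the thing concrete):

while (!video_finished(video)) {
    CompressedFrame *next = read_next_frame(video);  /* pull the next frame from the file */
    decompress(next, previous, planes);              /* often by applying diffs to the previous frame */
    recombine_planes(planes, composite);             /* merge the color planes into the final image */
    display(composite);                              /* send it to the screen */
    previous = next;
}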

A video codec is not something that you can just whip up from scratch, at least not in a startup environment. The system that compresses, stores, and decompresses the image data must be heavily engineered in order to remain efficient yet small in file size. This was not something we could do on our own.

Enter Bink

There are tons of video codecs out there. Apple’s iOS APIs support about 20 of them out of the box. iOS even has many built-in facilities that take advantage of the h.264 hardware on the phone for fast video decompression. Unfortunately we ran into a major issue with most of these formats: the lack of alpha support. Most video codecs were never designed with transparency in mind, but MinoMonsters requires it for compositing monsters with the background environment. I then recalled something I’d seen on the back of retail game boxes and on splash screens before:

Powered by Bink Video

Bink Video. What is this mysterious framework whose brand I recognized instantly? After reading the marketing copy it looked very much like our solution: a self-contained video codec that supports video with alpha, optimized for games, AND they had just released an iOS version. Yes, please.

What Bink Does

Bink is made up of a proprietary video format and codec. The codec can be embedded directly into your application via a static lib, avoiding any dependencies on iOS versions. The C framework then gives you facilities for opening these files, decompressing them, and compositing them on the CPU, plus some examples of how to get the final product onto the screen (which is very platform dependent). After verifying the resulting video files would be small enough, we proceeded with implementing a new rendering system using Bink.

The Bink Pipeline

Our first attempt at rendering with Bink looked something like this:

CPU Bink Rendering

Many video frameworks don’t maintain the video in RGBA channels, instead opting for a lesser-known YcRcBA representation. This representation consists of a luminance channel, two color channels, and an alpha channel. The reasoning behind this is that the human eye cannot perceive compression in the color channels as easily as it can in the luminance channel. This allows the two color channels to be compressed more heavily and even stored at a smaller resolution than the final video without sacrificing quality.

The first step in rendering a Bink video is decompressing these color planes. Below are examples of the separated color planes for a frame of a MinoMonsters animation:

Luminance, cR, cB, and Alpha planes

These images then have to be recombined to produce the final frame. The simplest way of doing this is with the built-in Bink methods, which do the compositing on the CPU (utilizing some of the vector hardware in the iPhone’s processor):

Composed

Following CPU composition, this image then has to be uploaded to the GPU for display in our OpenGL context. No problem.
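For the curious, the CPU path boiled down to something like the following. This is a sketch from memory of the Bink C API (check bink.h for the exact signatures and surface flags), not a drop-in implementation:

HBINK bink = BinkOpen("monster.bik", 0);              /* "monster.bik" is a made-up name */
unsigned char *rgba = malloc(bink->Width * bink->Height * 4);

BinkDoFrame(bink);                                    /* decompress the next frame's planes */
BinkCopyToBuffer(bink, rgba, bink->Width * 4,         /* composite the planes into 32-bit RGBA */
                 bink->Height, 0, 0, BINKSURFACE32);  /* ...on the CPU */
BinkNextFrame(bink);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,               /* upload the whole frame to the GPU, */
             bink->Width, bink->Height, 0,            /* every frame -- the expensive part */
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);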

The Performance

Well, there was one problem. This approach was slow. We were now asking the CPU to decompress a video file and then recombine it, pixel by pixel, before uploading it to the GPU. The CPU-based approach is plenty powerful if you are only displaying a single video; in MinoMonsters we are always playing at least two videos and sometimes four. We also started to run up against the limits of GPU bandwidth, as we were uploading a full texture every frame. But there was another way.

Loving the GPU

The GPU is built for this kind of work, taking data buffers and operating on them in parallel. Using the GPU to recombine the image, the pipeline becomes slightly modified:

Shader Pipeline

The key difference here being: we upload each color plane to the GPU as a separate texture. Then, through the powers of OpenGL, we recombine the planes into a render texture which is already on the GPU. We are then instantly ready to display this frame.
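The upload step becomes four small single-channel textures instead of one big RGBA one. A sketch, assuming GL ES 2.0-style luminance textures (the planeTex, planeWidth/planeHeight, and planes arrays stand in for whatever the decoder hands back):

for (int i = 0; i < 4; i++) {                    /* Y, cR, cB, A */
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, planeTex[i]);   /* created once with glGenTextures */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                 planeWidth[i], planeHeight[i],
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, planes[i]);
}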

YcRcBA Compositing in OpenGL

Recombining the color planes in GLSL is a fairly straightforward process. It can be represented entirely as a matrix multiplication and a post-bias addition. After several iterations, I found this to be the most efficient method of transforming the color vector in GLSL on iOS:
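(The snippet below is a reconstruction of the idea; the matrix and bias values are the standard BT.601 constants and should be treated as an assumption, since the exact numbers depend on Bink’s flavor of YcRcB.)

varying highp vec2 v_texCoord;
uniform sampler2D u_yPlane, u_cRPlane, u_cBPlane, u_aPlane;

const mediump mat3 conversion = mat3(1.164,  1.164, 1.164,   /* column-major: Y,   */
                                     1.596, -0.813, 0.0,     /* cR, and cB weights */
                                     0.0,   -0.392, 2.017);
const mediump vec3 bias = vec3(-0.874, 0.532, -1.086);       /* folds in the -16/-128 offsets */

void main()
{
    mediump vec3 ycc = vec3(texture2D(u_yPlane,  v_texCoord).r,
                            texture2D(u_cRPlane, v_texCoord).r,
                            texture2D(u_cBPlane, v_texCoord).r);
    mediump float a = texture2D(u_aPlane, v_texCoord).r;     /* alpha passes straight through */
    gl_FragColor = vec4(conversion * ycc + bias, a);
}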

Each color component of the YcRcBA vector is pulled from its individual texture, which has been uploaded to the GPU straight from Bink. The components are then combined into a vector and transformed. The alpha component does not need to be processed and goes straight into the resulting fragment. The fragment color is then passed down the rendering pipeline to eventually be ‘rendered’ to a framebuffer bound to a texture. This allows us to use the result in rendering without a download and upload step, saving considerable amounts of time.
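Setting up that render target is standard OpenGL ES 2.0: create a texture with no data, attach it to a framebuffer, and draw a full-screen quad into it with the shader above. Roughly (error checking omitted):

GLuint composite, fbo;
glGenTextures(1, &composite);
glBindTexture(GL_TEXTURE_2D, composite);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);    /* allocated on the GPU, never uploaded */

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, composite, 0);
/* draw the quad with the shader above; 'composite' is now an
   ordinary texture we can use when rendering the scene */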

Pure Savings

In my previous post I explored the memory costs of using traditional sprite sheet animations. An animation in that system typically ran upwards of 16 MB of memory. We can compare that to Bink.

Let’s examine a worst-case scenario: an animation that takes up the entire screen (as some of our attacks do!). The memory budget breaks down like this:

  • The video file itself is fairly small, about 500 KB on average. We load the entire file into memory to reduce filesystem read times.
  • Bink requires 2 buffers for each color plane due to the nature of its decompression algorithm. The luminance (Y) and alpha planes are 960x640 at 8 bits per pixel, or 614 KB each. The color planes are only half the resolution of the luminance and alpha planes, 480x320 at 8 bits per pixel, or 153 KB each. That is about 1.5 MB per buffer set; Bink requires 2, which brings us up to 3 MB.
  • Each animation has 4 associated textures on the GPU for uploading the color planes. These make up another 1.5 MB.
  • The final requirement is the destination texture: at 960x640 and 32 bits per pixel, it knocks us back another 2.4 MB.

This is a grand total of about 7.5 MB for a full screen animation. That is the absolute worst case, and against the 16 MB of the sprite sheet system it is better than a 50% reduction in memory usage!

This sizable reduction was enough to eliminate memory crashes for the vast majority of our users. I am convinced that it played a big part in pushing our app to the 5 star threshold.

The Costs

All these savings aren’t free. Aside from the Bink licensing fee (which was not bad), we have taken a lot of load off the system’s memory and pushed it onto the CPU and GPU. This did lead to a marked decrease in frame rate on the iPhone 4 and 4th generation iPod touch. Is it still playable? Absolutely, but you can feel it. We squeezed as much performance as we could out of these devices, but we just aren’t able to push back up to that 60 FPS gold standard. Our compromise was to lower the target framerate to 30 FPS, which gives us a smoother overall framerate and, I think, is preferable. The iPhone 4S has no problem keeping up, which gives me faith that as new hardware is released, frame rates will once again reach 60 FPS.

Solving Problems

Implementing a Bink-based rendering system was no small feat for me and our team. My personal understanding of OpenGL was basically zero before embarking on this journey. Many concepts had to be learned from first principles (read: the red book). The going was slow, but we emerged out the other side with something that solves our problems. This is what we do as software engineers. Sometimes there is a better way. You just have to figure it out.

Acknowledgements

This project was one of the most ambitious engineering efforts I’ve ever undertaken. It would not have been possible without the rest of the MinoMonsters team clearing the road for me and letting me dive into the bowels of the OpenGL stack. Also, a big thanks to the team at Bink; they were nothing but helpful and responsive to my questions during the integration process.


Sep 6

Programming is Hard

How do I learn to program?

As a software engineer I am often approached by people with the question “How do I learn to program?” It fills me with an excitement that is hard to describe. I get the privilege of revealing to someone the agony and ecstasy of building software. From here I jump into the usual diatribe, discussed later, about how to get started. The wannabe student typically walks away with a mixture of excitement and dread, as I have revealed to them that there is no secret. The only fact that anyone needs to accept is this: programming is hard.

How hard?

Any new ‘trick’ that a person wants to learn can generally be assigned some difficulty level. At the low end of the spectrum we have tasks that are trivial for adults: washing dishes, sorting objects, simple cleaning, basic manual labor. These can be taught in minutes, if not seconds. Beyond that we have tasks which may take a few hours to learn: driving a stick shift, driving a boat, folding clothing, operating a machine press. These tasks, while trivial, increase in nuance and sophistication; a working ability can be developed in an afternoon. Beyond this, we have skills that take days: landing an airplane, parallel parking, juggling, riding a unicycle, carpentry. These can be developed to a basic, functional ability over the course of a few days to a week. Programming, on the other hand, will take you months.

What makes it so hard?

This is a good question. I’m not entirely certain, but having watched so many people struggle in the infancy of their learning, I think the core problem is the inability to create visual/physical analogs for what they are doing. All of the minute/hour/day tasks listed above are manipulations of systems from which the learner gets direct visual or tactile feedback. Pushing this thing moves that thing. Shifting to this gear pushes me into my seat. Every action is met with a flood of stimulation that allows the brain to more quickly adopt pattern recognition (also known as learning). With software, especially in the early stages, the effects of one’s actions are much more obscured. Perhaps it is an interface issue; text is not the most informative I/O, yet it is the interface of choice when learning to code.

What takes a lot of people off guard is that they are expected to solve problems in this abstract space. Give someone a physical puzzle, like getting a couch out a door that’s too small, and they will excel. Remove the physical couch and door, replacing them with programming abstractions, and suddenly the person doesn’t know where to begin. The brain is lost in a void, starved of signals for what is actually happening inside the machine. Programming is not just learning to solve problems; it is chipping away the mystique of the machine and developing an understanding of the consequences of your inputs.

How long until I am good?

A Case Study

Out of curiosity, I’ve taken a look at my own experience to figure out exactly when I reached the tipping point of being less confused about writing code.

  • Fall 2005: I receive an undergrad research position in college; spend a few weeks’ worth of time, spread over the better part of a year, getting sockets to work in C. (100 hours)
  • Summer 2006: I receive $500 to write a Ruby on Rails site for a realty company; the quality is really low. (40 hours)
  • Fall 2006: Begin taking computer science classes, many tiny projects. Still so confused. (200 hours)
  • Summer 2007: Land an internship at Cisco, spend the summer writing test scripts in TCL for IP phones; starting to slightly understand things. (200 hours)
  • Fall 2007: Return to the research position and begin writing a real-time PubSub system in Python; wrote tests, think I am getting things. (500 hours)
  • Fall 2007: Launch of the Facebook platform; wrote a handful of crappy Facebook apps, hosted from my parents’ basement. (50 hours)
  • Winter 2008: iOS SDK announced; begin working on an analytics tool for iOS apps, which became AppLoop. (500 hours)
  • Summer 2008: Published a Paddle Ball app on the App Store and sold off some other apps I wrote. (200 hours)
  • Fall 2008: Return to the research project and begin building complete telescope control systems on top of my PubSub system. (500 hours)
  • Spring 2009: Land first real job as a programmer.

I think some of these numbers may be gross underestimations (I spent A LOT of time on AppLoop), but before I was able to feel comfortable taking a salary for producing code I had in the neighborhood of 2,200 hours of experience. I think I finally started to ‘get it’ around hour 1,700; before that I was just stabbing around, typing until I got results. For comparison, I received my pilot’s license with only 60-some hours of training. Twenty-two hundred hours is a lot of time, not quite Gladwell’s 10,000, but getting there. It takes time, a lot of time. Left out of this is the fact that I was simultaneously minoring in computer science, which added an uncalculated amount to my hours.

100 Bad Projects

An artist friend of mine once relayed to me a quote:

"Everyone has a 1000 bad drawings inside of them. The sooner we get those out, the sooner we can start making better drawings."

The same holds in programming. For us it is maybe 100 bad projects, and there is nothing you can do but get them out of you.

Between my work, homework, and research I was probably at about the 100-project mark when I started to be a little less confused about what I was doing. There is nothing I can impress upon the new learner more than to push through these 100 projects as quickly as possible. Build a funny little script, a Twitter bot, a single serving site, a simple iPhone app. All of these projects are within the reach of the fresh-out-of-tutorial-land newbie; you just have to grab them and run. Then keep grabbing. A hundred times.

My Typical Spiel

When someone comes to me with the nebulous “I want to learn to program,” I usually send them some variation of the following theme:

  • Get Involved
  • Get Learning
  • Get Making

Get Involved

Start reading Hacker News every day (not all day), even if you don’t understand it. Observe the hacker in its natural environment, and try to keep tabs on the things you see a lot of: JavaScript, Rails, Clojure, RabbitMQ. Go to user groups, like CocoaHeads or a local Ruby meetup. Get immersed in the field. If you want to be a programmer, start hanging out with them. Convince yourself you are a programmer.

Get Learning

The tools out there are just getting better and better for learning how to code. Codecademy can teach the basics. Getting a good book and working through it will jumpstart the process. I can’t emphasize the working through it part enough. Most books have challenges, some so complicated they may even knock out one of our 100. Getting your money’s worth from the books is highly recommended. My favorite books for learning iOS are by Hillegass. On the web side, it seems this tutorial is really good. Pick one and finish it, no matter how confused you are. Keep at it until it makes even a little sense. Devise your own challenges. Push yourself.

Get Making

Nothing is going to get you closer to being a programmer than writing code. No book can teach like the experience of realizing the error of your ways. Every new project is a chance to avoid the mistakes of the past and do it right. This process never stops. Make coding a part of your routine. Eliminate the distractions that allow you to do anything else.

Programming is Hard

There is no way around it. There is a reason the demand for programmers exists. Keep pushing and keep making.


Sep 5

The Problem with Animation

Our Style

I’ve been working on MinoMonsters for over a year now. From the beginning we knew we wanted full motion, hand-drawn animations, giving our monsters that lush, Disney quality that hadn’t been seen on iOS at the time.

No problem, right?

The only approach that seemed to support our stylistic desires was rasterizing our animations out frame by frame: taking the raw animation files (created in Flash) and exporting each frame as a static PNG. We would then take the PNGs and reassemble them using TexturePacker to create texture sheets. Poking around, this seemed to be the standard for 2D animation on iOS (especially with cocos2d).

We would run our SWFs through TexturePacker and out the other side we would end up with something like:

Texture sheet.

Every frame of the animation is placed into the same final image. This is done so that we only have to upload the animation to the GPU once; we can then display each frame in rapid succession, simply telling the GPU where in the texture to draw our sprite from. This gives texture sheet animations the advantage of an essentially zero cost of advancing frames. We had done it. We had brought our monsters to life, but at what cost?
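Advancing a frame is nothing more than pointing the quad’s texture coordinates at a different rectangle of the sheet. A sketch, with the frame rectangle coming from TexturePacker’s metadata (the names and float fields here are ours):

GLfloat u0 = frame.x / sheetWidth,  v0 = frame.y / sheetHeight;
GLfloat u1 = (frame.x + frame.w) / sheetWidth;
GLfloat v1 = (frame.y + frame.h) / sheetHeight;
GLfloat texCoords[8] = { u0,v0,  u1,v0,  u0,v1,  u1,v1 };   /* the quad's UVs for this frame */
/* no new upload -- just new coordinates each frame */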

The Costs

The OpenGL ES implementation on iOS has a few limitations that affect the way textures can be used:

  • Textures must be a power of two in each dimension. (i.e. 512, 1024, 2048 pixels)
  • Textures must be square. (This requirement only applied to older, 2nd gen iOS hardware.)
  • Textures must not be larger than 2048 by 2048. (Half that on SD hardware.)

These constraints result in two conditions: either our textures will need empty space to satisfy the first two constraints or they will be too large to fit in a 2048 by 2048 space. Issues, but not deal breakers.

Sprite sheets are fast to animate at the cost of memory. How much memory? Well it depends on two factors:

  • The size of your texture. (i.e. 2048x1024)
  • And the format of your texture. (8 bits per pixel, 16 bpp etc.)

Animations running up to three seconds can easily need a 2048 by 2048 texture sheet. For our animations, we restricted our artists to a limited palette of colors (meaning no gradients). This allowed us to use a limited color space, reducing the bits per pixel by a factor of 2: with 4 bits for each channel (RGBA), we get 16 bits per pixel. A quick calculation shows that 2048 * 2048 * 16 ~ 67 Mbits ~ 8 MB. Each open animation has the potential of using upwards of 8 MB of memory. On an iOS device, this is a significant portion of the memory available to applications.

When a texture is opened from disk we must first load it into application memory to unpack it from the texture file. We then upload it to the GPU. While uploading, we have 8 MB allocated in both application and video memory, for a total of 16 MB for just one animation. A spike of 16 MB is more than enough to set off iOS’s memory warning system, leading to warnings, or worse, jetsams. (This WWDC video does an amazing job of covering how memory management works on iOS.)

The Crashes

We proceeded to launch with this architecture and unsurprisingly we heard from a lot of our users something like:

"This game is amazing … even with all the crashes."

Every one of these reviews was like a punch in the soft spot. Didn’t they understand that their app was being jettisoned? It wasn’t my fault; images are big, they take lots of memory! It worked fine on the iPhone 4 (lots of memory) and iPhone 3GS (a quarter the resolution), but we were really getting hammered on the 4th generation iPod touch, which was blessed with 4 times the resolution of its predecessor but only twice the memory. We knew why these crashes were happening, but I felt my hands were tied.

The Constraints

Beyond the piss-poor app stability, our artists were having to work in an artificial box. In order for animations to fit in the required size constraints they often had to be adjusted or sized down. Want to do a full screen fireball? Sorry, that will take up too much space on the texture sheet. Our artists had to constantly consider this nebulous limitation imposed by a 2048x2048 packed texture sheet (nearly impossible to judge by eye). And not only were the length and bravado of our animations limited, the ‘physical’ size of the monsters was too, meaning larger monsters simply would not work. Sorry fatty…

Fatty was too big.

2D Animation Sucks

The bottom line: 2D animation in a game is hard. Unlike a 3D game, where you texture a model once and then sling it around like a digital marionette, 2D animation requires pixel-perfect crafting for every frame. Every new angle, new sizing, or even new placement of your art requires reprocessing from raw assets. This quickly leads to bloated app memory, reduced stability, and oversized bundles.

Dr. Shaderlove or: How I Learned to Stop Worrying and Love the GPU

In part three I will discuss the techniques and technology we used to solve this problem. (Hint: It involves bending the GPU to our will.)


Aug 30

Love the GPU

For many programmers the GPU is a part of the system shrouded in mystery. A realm best left to heavyset bearded types who toil away in the deep recesses of the gaming industry. With all the abstraction on top of the GPU, all but a few programmers can live out their daily lives without so much as a thought about how the things they make are actually rendered. Living behind these abstractions is fine, until you reach a limit, usually one of performance. When reaching these limits it can be really helpful to have a basic understanding of the GPU.

My experience is solely with OpenGL; I sort of see the GPU and OpenGL as one and the same. I attribute this to my only hands-on experience with graphics hardware being carried through OpenGL commands. If I have committed some gross semantic fallacy, please call me out on it, nerd.

Function

The function of the GPU is very simple at a high level: it provides an interface for the CPU to display colors in a grid on a display. At its most basic, the GPU takes commands from the CPU and sends pixels to the screen. How the GPU sends its data to the display is not of much interest to the developer, as it is extremely platform dependent and basically inaccessible. The CPU side of the GPU’s function is where we take interest.

Interface

Because of the diversity of hardware implementations of GPUs, one basic abstraction usually holds: the GPU and the CPU are separate entities, and access occurs over a client/server model. In this case the GPU is the server and the CPU the client. Anytime the CPU wants something out of the GPU it sends a message, usually asynchronously. In general the GPU and the CPU do not share memory (they may share the same physical memory, but access is limited), so anytime the CPU wants data (images) to be drawn to the screen, it must first send it to the GPU. This process is known as upload. This was one of the first ‘aha’ moments I had when working with the GPU: anytime your app opens an image file and wants to draw it, that data must first be pulled into memory, converted into raw bitmaps, and uploaded to the GPU. Want to change that image? Fine, but first we must upload a change to the data. Nothing the CPU can touch directly can have an effect on what is being drawn. This leads to some interesting design constraints.

Architecture

When opening up a book on OpenGL for the very first time, the reader is often met with a diagram that looks like this, which an expert on the internals of a rendering pipeline may consider essential to using the APIs. This massively complicated system can be broken down into a few key ideas that help demystify the inner workings of the OpenGL rendering pipeline. OpenGL has some basic abstractions that get used over and over again. The most important of these is probably the buffer: an abstraction for data storage, usually arranged in a visual way (i.e. x and y). One of the very important buffers in the pipeline is the framebuffer. Framebuffers are encapsulated rendering destinations. There is typically a default framebuffer which renders out to the screen, but an application can have several framebuffers used for drawing to textures as well. More on this later. The second of the important objects in the GPU is the texture. A texture is simply a raw data buffer living in the GPU. Textures most commonly come from data uploaded by the CPU but can also be generated by the GPU itself.

Drawing

How does the drawing actually occur? Well, for the most part, each ‘frame’ the CPU issues a bunch of commands that boil down to: draw this texture here, draw that texture there, etc. (things are different in a 3D application, but similarly simple). The CPU binds the framebuffer it cares about, issues commands to draw certain textures or primitives, and that’s it. If a single framebuffer is being used, the rendering loop is just that: bind, draw, display, 60 times a second or so. If multiple framebuffers are in use, the order of things may go something like: bind the framebuffer that renders to a texture, draw things, bind the main framebuffer, draw the rendered texture, present the main framebuffer.
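In code form, a two-framebuffer frame might look like this (the draw calls are hypothetical stand-ins for your own rendering code):

glBindFramebuffer(GL_FRAMEBUFFER, textureFBO);   /* render-to-texture pass */
drawIntermediateThings();
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);   /* main pass */
drawSceneWithRenderedTexture();
presentFramebuffer();                            /* on iOS, -presentRenderbuffer: */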

Shaders

So we have buffers of image data on the GPU (textures) and we have a place to draw them to (the framebuffer); how does the pixel data in a texture actually make its way to the framebuffer? Once the positioning and depth of the pixels in the destination buffer have been decided, the GPU uses (on a programmable pipeline at least) a small program called a shader to determine what color actually ends up in the destination buffer. Shaders started out as assembly programs written for specific graphics hardware to control the behavior of the rendering pipeline at the lowest level. The OpenGL Shading Language (GLSL) lets the developer use a C-like language to tell the GPU exactly how every fragment should be processed. This is where the magic of getting the GPU to do what you want lives.

Bending the GPU to Your Will

Let’s say you have two 2D arrays of data you need to perform an operation on. Say you need to, for simplicity’s sake, average the corresponding fields in each array. If you were to do this on the CPU it would look something like:

for (int x = 0; x < WIDTH; x++) {
  for (int y = 0; y < HEIGHT; y++) {
    result[x][y] = (array1[x][y] + array2[x][y]) / 2;
  }
}

This would get the job done, while blocking the CPU, not taking advantage of multiple cores, etc. You could rewrite it to take advantage of multiple CPUs, or you could just let the GPU handle it; the GPU is built for doing operations on multidimensional data.

To do this same thing on the GPU you would first upload each array as a ‘texture’:

glBindTexture(GL_TEXTURE_2D, tex1);   /* tex1 and tex2 come from glGenTextures */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, WIDTH, HEIGHT, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, array1);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, WIDTH, HEIGHT, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, array2);

You would then render an object of similar dimensions as your array, and use as a fragment shader something like this:

gl_FragColor.r = (texture2D(array1, uv).r + texture2D(array2, uv).r) / 2.0;  /* uv is the varying texture coordinate */

Now, if we rendered this to a texture, we could then download the texture from the GPU and bam, we have our array of averaged data.
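That download step is a single (pipeline-stalling) call, assuming result points at a WIDTH x HEIGHT x 4 byte buffer:

glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, result);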

This example is a little contrived, but the GPU is an amazingly powerful piece of hardware that is often underutilized. Image manipulation, realtime video processing, and crazy data crunching can all take advantage of the power of the GPU.

Further Reading

If you are interested in getting deeper into the OpenGL stack, start with the red book. It didn’t really make sense to me until the 4th or 5th time cracking into it, but it is really thorough and covers OpenGL and graphics hardware completely.

For some cool tricks with shaders you may want to check out the GPU Gems series of books. Amazing tricks and hacks for bending the GPU to your will.

Stay tuned: in a future post I will reveal how, by bending the GPU to our will, MinoMonsters is able to bring 4 monsters to life in full animation on a mobile device.


Dec 14

Merry Christmas


Aug 24
Jake and Sarah are in Europe



Jul 12

A Man After My Own Heart

“[Steve] draws a rectangle. ‘Here’s the new application,’ he says. ‘It’s got one window. You drag your video into the window. Then you click the button that says burn. That’s it. That’s what we’re going to make.’”



Jul 9

Customer Service

Every morning I wake up and go downstairs to the coffee shop in my building. I order one iced coffee and one banana. Today, as I was walking away nomming on my banana, I realized that the inside of said banana was black, rotten, and inedible. I usually don’t complain about a minor disappointment in food service because it usually means a lot of work for the staff. If, say, my dish is not exactly to my liking, I am not going to demand another. Who cares, it’s still food.

But this was different.

I had ordered a banana which would be my only food for the next 4 hours or so. Without this banana I was likely to have a terrible morning of zombie-ness and hunger. I returned to the counter and kindly showed the barista that, after I had bitten the first few inches off my banana, the center was rotten. Not mushy, extra-sweet, overripe rotten, but the kind of rotten that makes you expect a worm to emerge.

"Hey, this banana’s rotten in the middle, can I grab another?"

"WHAT? AFTER YOU ATE HALF OF THAT ONE?"

Artist Depiction

"I’m sorry?", I say, slightly confused.

"Yeah take one, its fine, don’t worry about it."

"Ok…", I sheepishly grab what looked like a fresher breakfast banana. 

Why do I feel like the asshole here?

You sold me a rotten banana, which, fine, who cares. It’s hard to guarantee fruit. But do not sell me rotten fruit and then scowl at me for kindly asking for a replacement.

Kiss your customers’ asses. They are your boss.

I hope Buckeye Donuts sells bananas. 


Mar 22

String Theory

"If we zoom in far enough, we see that the particles are actually little rubber bands." - Complete Idiots Guide to String Theory

This is so wrong. This implies that there is some magical phenomenon that would allow you to directly observe the physical structure of a string. It cannot be observed, so don’t imply that it can. You should say that particles act as if they are made up of tiny strings vibrating at different frequencies. 


Jan 28

SEKURITY!!!!!!!

I get these emails every couple of weeks; they crack me up.

Sometimes I get emails about space security, those are even better.


