The Adumbration

The Adumbration lumbered in the distance, outline clouded by horizon’s haze.

“What is it?” the countryside cried, “What will it mean?” the elders worried.

Its dusty gray obscuring more and more into the clear blue sky, the Adumbration loomed.

“What should we feel?” the townspeople asked, “Every man think for himself!” the scattering elite replied.

Indiscernible, nearly there but not yet here, almost not yet a thing unto itself, the Adumbration rusted the land.

“We must know if we are to go on!” the people plead, “We can’t really tell.” duly unspoken.

Coming still, roiling yet becalmed, ephemeral but always, the Adumbration was unseemly.

“We will distract ourselves with seeking” some said, “We will distract ourselves with providing.” said some.

Almost invisible the Adumbration stayed.

“Now we are certain!” proclaimed the everyman. “Now we are content.” thought his mind.

The Adumbration continued.

What would happen to me if I fell into a Black Hole?

A Black Hole

It’s safe to say you wouldn’t survive the trip, so stay on this side of the Event Horizon if you ever want to be seen again.

Black Holes are massive objects packed into a tiny volume of space. When a massive star (many times more massive than the Sun) can no longer sustain enough nuclear fusion at its core to hold itself up against gravity, its core may collapse in on itself and form a Black Hole, pulling in anything that strays too close. Our own Milky Way galaxy orbits a super-massive Black Hole at its center.

Since nothing that crosses a Black Hole’s Event Horizon can escape its gravitational pull (not even light), scientists can only speculate on what would happen to a person falling into one. However, it does seem evident that the crushing gravity would not be kind to your body. Soon after passing the Event Horizon, the point of no return, your body would begin to be stretched and torn apart. The parts of your body closer to the singularity experience a stronger gravitational pull than the parts farther away. This “tidal gravity” creates a differential pull on your body that literally stretches you out as you fall in. Alas, the rack of space-time is unforgiving to even the most pliant mind, and ultimately it’s impossible to keep yourself together.
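A rough way to see why this stretching happens (a standard Newtonian tidal-force estimate, not a calculation from the post above): for a body of height $\Delta r$ falling feet-first toward a mass $M$ at distance $r$, the difference in pull between feet and head is approximately

$$\Delta F \approx \frac{2\,G\,M\,m\,\Delta r}{r^{3}},$$

where $m$ is the body’s mass and $G$ is the gravitational constant. Because the effect grows as $1/r^{3}$, a small stellar-mass Black Hole shreds you well before you reach its Event Horizon, while a supermassive one lets you cross intact and saves the stretching for later.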

Interestingly, getting sucked into a Black Hole doesn’t necessarily mean all trace of you is permanently erased. Physicist Stephen Hawking theorized back in the 1970s that Black Holes slowly leak minute amounts of radiation energy. If, as many physicists now believe, the information of the universe is conserved, then this so-called “Hawking Radiation” must in some scrambled form carry the information characteristics of the stuff the Black Hole gobbled up: swallow the mass-energy of a carbon atom, and the energy equivalent of a carbon atom eventually spews back out as Hawking Radiation. So, at least in quantum theory, you could be reconstituted bit-by-bit if an outside observer were able to interpret the Hawking Radiation and piece you back together.

Cosmologist Ted Bunn’s Black Hole FAQ (Frequently Asked Questions) offers many expanded answers: http://cosmology.berkeley.edu/Education/BHfaq.html

Simpson’s Paradox

Consider the following cartoonish thought experiment:

Homer and Lenny get into a week-long grudge match at the nuclear power plant over which of the two can eat more donuts. They decide to settle it once and for all in a weekend donut-eating contest: each of them gets 100 donuts, and whoever has eaten more by the end of the weekend wins.

Lenny secretly knows Homer will be able to out-eat him, but he also knows something about statistics that he’s hoping Homer doesn’t. Lenny suggests, and Homer agrees, that Lisa will be in charge of moderating the match to keep things fair.

Lisa will buy 100 donuts each morning and divide the 100 donuts into two boxes, one for Homer and one for Lenny. After setting the two boxes out for the day, Lisa will return periodically to mark the percentage of each box’s donuts that have been eaten. In this way Homer and Lenny know how they’re faring against each other and each can adjust their eating behavior over the day to try and keep up with the other.

Here’s how it goes down:

    1. On Saturday, Homer eats more of his box of donuts than Lenny eats of his box of donuts.
    2. On Sunday, Homer eats more of his box of donuts than Lenny eats of his box of donuts.
    3. On Monday, Homer is shocked to find that Lenny has won in the final tally by over a dozen donuts!

    Wait, how did that happen? Lenny didn’t cheat and Lisa didn’t divide them unfairly; each got the opportunity to eat 100 donuts. So why didn’t Homer win when the percentages always showed him in the lead? Lisa divided each daily allotment of 100 donuts into a box of 90 donuts and a box of 10 donuts. Lenny won by weighting.
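    Here is one concrete split consistent with the story (the specific numbers are my own illustration, not from the original post), sketched in Python to show how Homer can lead in percentage both days and still lose the total:

```python
# Hypothetical split: Lisa divides each day's 100 donuts into a 90-box and a 10-box.
# Format: name -> (donuts eaten, donuts in that day's box)
days = {
    "Saturday": {"Homer": (9, 10),  "Lenny": (72, 90)},
    "Sunday":   {"Homer": (45, 90), "Lenny": (4, 10)},
}

totals = {"Homer": 0, "Lenny": 0}
for day, results in days.items():
    for name, (eaten, box) in results.items():
        totals[name] += eaten
        print(f"{day}: {name} ate {eaten}/{box} = {eaten / box:.0%}")

print(totals)  # {'Homer': 54, 'Lenny': 76} -- Homer wins every daily percentage, Lenny wins by 22
```

    Homer’s best percentages come on the day he holds the 10-donut box, so they count for little; Lenny piles up raw donuts on his 90-box day. That weighting is the whole trick.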

    Simpson’s Paradox, as explained by singingbanana:

    Why is Simpson’s Paradox important to remember in the real world? As mentioned in the video above, direct percentage comparisons of weighted data are a risk in any field that uses statistical analysis, especially the social sciences. Failing to consider what a statistical percentage comparison actually measures can lead to less favorable outcomes despite seemingly favorable supporting data. If you want your doctor to pick the best medicine (Drug A) for you, you’d better hope he gets the proper recommendation from the groups running the statistical analysis first. Otherwise you may get a worse medication despite the availability of a more effective alternative, and neither you nor your doctor will be any the wiser.

    Statistical analysis is an important way to get a holistic understanding of phenomena, but interpreting test results isn’t as easy as it looks; sometimes you miss the holes staring you right in the face.

    Questioning the Answers

    Archimedes

    Why would computers deprive us of insight? It’s not like it means anything to them…

    Surreal story time! The setting: Cornell University. Fellow scientists Hod Lipson and Steve Strogatz find themselves thinking about our scientific future very differently in the final story of WNYC Radiolab’s recent Limits episode. In the relatively short concluding segment, “Limits of Science”, Dr. Strogatz voices concern about the implications of automated science as we learn about Dr. Lipson’s jaw-dropping robotic scientist project, Eureqa.


    I can relate to Steve Strogatz’s concern about our seemingly imminent scientific uselessness. But is there actually anything imminent here? Science is the language we use to describe the universe for ourselves. Scientific meaning originates with us, the humans who cooperate to create the modal language of science. What is human language, or ‘meaning’, to the Eureqa bot but an extra step that repackages the formula into a less precise, linguistically bound representation? If one considers mathematics to be the most concise scientific description of phenomena, hasn’t the robot already had the purest insight?

    Given the sentiments expressed by Dr. Strogatz and Radiolab’s hosts Jad and Robert, it’s easy to draw comparisons between Eureqa and Deep Thought (the computer that famously answered “42” in The Hitchhiker’s Guide to the Galaxy). Author Douglas Adams was as brilliant a satirist as he was a prescient predictor of our eventual technological capacity (insofar as Deep Thought is like Eureqa). The unfathomably simple answer of “42”, and the resulting quandary facing the receivers of the Answer to Life, the Universe, and Everything in HHGTTG, is partially intended to make us aware of the limits of our own comprehension.

    More importantly, it shows that meaning is not inherent in an answer. 42 is the answer to countless questions (e.g. “What is six times seven?”), and Douglas Adams perhaps chose it with this fact in mind. Consider that if the answer Deep Thought gave had been a calculus equation 50,000 pages long, the full insight of the satire might be lost on us; it’s easy to assume that an answer so complicated is accordingly meaningful, when in fact the complex answer is no more inherently accurate or useful in application than the answer of 42.

    Deep Thought

    The Eureqa software doesn’t think about how human understanding is affected by the discovery of the formulae that best describe correlations in a data set. When Newton observed natural phenomena and eventually arrived at his second law of motion, “F = ma”, he reached the same conclusion as the robot; the difference is that Newton was a human-concerned machine as well as a physical observer. He ascribed broader meaning to the formula by associating the observed correlation with systems that matter to human minds: the scientific language of physics, and consequently engineering and technology. A robotic scientist doesn’t interface with these other complex language systems, and therefore does not consider the potential applications of its discoveries (for the moment, at least).
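    To make the comparison concrete, here is a toy sketch (my own illustration, not how Eureqa actually works) of a machine “rediscovering” F = ma: given noisy measurements of force and acceleration for one cart, a least-squares fit recovers the constant relating them, i.e. the mass, and stops there. What that constant means is left entirely to the humans.

```python
import numpy as np

# Simulated observations of a 2 kg cart: applied force vs. measured acceleration.
rng = np.random.default_rng(0)
true_mass = 2.0
force = np.linspace(1.0, 10.0, 20)                    # newtons
accel = force / true_mass + rng.normal(0, 0.05, 20)   # m/s^2, with sensor noise

# Least-squares fit of F = m * a recovers the proportionality constant.
m_est, *_ = np.linalg.lstsq(accel.reshape(-1, 1), force, rcond=None)
print(f"fitted m = {m_est[0]:.2f} kg")  # ~2.0; the machine's job ends here
```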

    Eureqa doesn’t experience “Eureka!” insight because it isn’t like Archimedes, a man so thrilled by his bathtub discovery of water displacement that legend remembers him running naked through the streets of Syracuse. He realized that his discovery could be of incalculable importance to human understanding. It is from this kind of associative realization that the overwhelming sense of profound insight emerges. When Eureqa reaches a conclusion about the phenomena it is observing, it displays the final formula and quietly rests, having already discovered everything that is inherently meaningful. It does not think to ask why the conclusion matters, nor can it tell as much to its human partners.

    “Why?” is a tough question; the right answer depends on context. Physicist Richard Feynman, in his 1983 BBC interview series “Fun to Imagine”, takes time for an aside during a question on magnetism. When asked “Why do magnets repel each other?”, Feynman stops to remind the interviewer and the audience of a critical distinction in scientific or philosophical thinking: “why” is always relative.
     

    “I really can’t do a good job, any job, of explaining magnetic force in terms of something else that you’re more familiar with, because I don’t understand it in terms of anything else that you’re more familiar with.” – Dr. Feynman

    Meaning is not inherent or discoverable; meaning is learned.

    Making Virtual Sense of the Physical World

    You’ll remember everything. Not just the kind of memory you’re used to; you’ll remember life in a sense you never thought possible.

    Wearable technology is already accessible and available to augment anyone’s memory. By recording sensory data we would otherwise forget, digital devices enhance memory somewhat like the neurological condition synesthesia does: through automatic, passive gathering of contextual ‘sense data’ about our everyday life experiences. During recollection, having that extra contextual information stimulates significantly more brain activity, and accordingly yields marked improvements in accuracy.

    This week, Britain’s BBC2 Eyewitness showed off research by Martin Conway (Leeds University): MRI brain scans of patients using the “SenseCam” from Girton Labs (Cambridge, UK), a passive wearable accessory that takes pictures when triggered by changes in the environment, capturing momentary memory aids.

    The BBC2 Eyewitness TV segment on the SenseCam as a memory aid:

    The scientists’ interpretation of the brain imaging studies seems to indicate that the vividness and clarity of recollection are significantly enhanced for device users, even with only the fragmentary visual snapshots from the SenseCam. One can easily imagine how a device that also records smells, sounds, humidity, temperature, bio-statistics, and so on could drastically alter the way we remember everyday life!

    Given this seemingly inevitable technological destiny, we may feel the limits of human memory changing dramatically in the near future. Data scientists are uniquely positioned to see this coming; a recent book by former Microsoft researchers Gordon Bell and Jim Gemmell, Total Recall: How the E-Memory Revolution Will Change Everything, begins its hook with “What if you could remember everything? Soon, if you choose, you will be able to conveniently and affordably record your whole life in minute detail.”

    When improvements in digital interfacing allow us to use the feedback from our data-collecting devices effortlessly and in real-time, we might even develop new senses.

    A hypothetical example: my SkipSenser device can passively detect infrared radiation from my environment and relay this information, immediately and unobtrusively, to my brain (perhaps first imagine a visual gauge in a retinal display). By simply going through my day to day life and experiencing the fluctuations in the infrared radiation of my familiar environments, I will naturally begin to develop a sense for the infrared radiation being picked up by the device. In this hypothetical I might develop over time an acute sense of “heat awareness”, fostered by the unceasing and incredibly precise measurements of the SkipSenser.
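    A minimal sketch of that feedback loop, with everything in it (the SkipSenser, its readings, the gauge) purely hypothetical:

```python
import random
import time

def read_infrared() -> float:
    """Stand-in for a hypothetical SkipSenser infrared reading, in arbitrary units."""
    return random.uniform(0.0, 100.0)

def to_gauge(reading: float, width: int = 20) -> str:
    """Map a reading onto a simple text gauge -- a crude stand-in for a retinal display."""
    filled = int(reading / 100.0 * width)
    return "[" + "#" * filled + "-" * (width - filled) + "]"

# Relay the reading continuously and unobtrusively; the premise is that, over time,
# the wearer stops consciously reading the gauge and simply 'feels' the changes.
for _ in range(5):
    print(to_gauge(read_infrared()))
    time.sleep(0.1)
```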

    Of course I’m not limited to infrared radiation for my SkipSenser; hypothetically anything detectable can stimulate a new sense. The digital device acts as an aid or a proxy for the body’s limited analog sense detectors (eyes, ears, skin, i.e. our evolutionary legacy hardware) and also adds new sense detectors, allowing the plastic brain to adapt itself to new sensory input. I could specialize my auditory cortex, subtly sensing the characteristics of sound waves as they pass through the air, discovering patterns and insights previously thought too complex for normal human awareness. I could even allow all of my human senses to slowly atrophy in favor of fomenting a set of entirely unfamiliar senses, literally changing my perception to fit some future paradigm.

    NASA Interferometer Images


    Augmenting our sensory systems isn’t new; it’s what humans are naturally selected for. Generally speaking, ‘tool’ or ‘technology’ implies augmentation. If you drive a car, your brain has to adapt to the feel of the steering wheel, the pressure needed to push the pedals, the spatial dimensions of the vehicle, and the gauges in the dashboard. While you learned how to drive a car (or ride a bike), your brain was building a neural network by associatively structuring neurons, working hard to find a system good enough to both A) accurately handle these new, arbitrary input parameters and B) process the information at a rate that allows you to respond in a timely fashion (i.e. drive without crashing). That ability to restructure based on sensory feedback is the essence of neuroplasticity; it’s how humans specialize, and how humanity shows such diverse talent as a species.

    That diversity of talent seems set to explode, because here’s what is new: digital sensors that are easy to use, increasingly accessible, and surpassing human faculties. Integrated devices like the SenseCam keep adding functionality while shrinking in size and effort required, and the cost-benefit calculation now appeals not only to the disabled but to the everyman.

    There may be no limits to the range of possible perception. Depending on your metaphysical standpoint, this might also mean there may be no limits to the range of possible realities.

    Vanishing Words Tell Illuminating Tales

    The Library of Congress struck a deal a few weeks ago to acquire Twitter’s complete archive of public messages. It’s not a particularly impressive number of bytes by itself, but it’s a goldmine for computational analysis, and that academic potential is why the government wants to obtain what might otherwise seem like a vast cacophony of meaningless chatter.

    In the WNYC Radiolab podcast released today, “Vanishing Words”, Jad and Robert look at linguistic computation. Specifically, the idea that you can identify and predict dementia using word analysis of personal history, say a collection of letters or diary entries. Or if you’re Agatha Christie, crime novels. If you’ve got a minute let Jad Abumrad & Robert Krulwich tell you about this:

    Working with Jad’s mention of “the age of Twitter”: online services like Twitter, Facebook, Google, and so on are quite earnestly working with words as scientific data; it’s a core element of staying competitive in their business. Computational language analysis is a fascinating field, and luckily it also seems to have powerful economic incentive.

    Word data is probably still the easiest way to directly get highly personalized information about a person (e.g. a status update, a tweet). Facebook Data Scientists, for example, work primarily to teach computer models to translate the words used in Facebook status updates into meaningful demographic data. The computers gather the information and the scientists pick out interesting patterns so that better, more personalized advertising can be served. Better-targeted ads translate to actual interest in ads, which translates to business.

    Computational research and analysis (like the studies mentioned in this Radiolab podcast) is exploding commercially and academically, like a virtual internet gold rush. Supply is growing exponentially as hundreds of millions of people use online services to communicate publicly. Demand is blowing up too, because we’re realizing, like these scientists discovering something deeply personal about Agatha Christie, just how much we can learn from a simple collection of words.
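    As a toy illustration of the kind of signal such studies chase (my own sketch, not the researchers’ method), one crude measure of vocabulary richness is the type-token ratio, the share of distinct words in a text; a sustained decline in measures like this across an author’s books is the sort of pattern linked to dementia in the Christie analyses:

```python
import re

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words -- a crude vocabulary-richness score."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

# Made-up sample sentences, purely for illustration.
early = "the clocks struck, the garden waited, and the letters kept their secrets"
late = "it was a thing, a big thing, and the thing was there and it was there"
print(f"early-style sample: {type_token_ratio(early):.2f}")  # ~0.83
print(f"late-style sample:  {type_token_ratio(late):.2f}")   # ~0.50
```

    In practice researchers compare equal-length samples, since the raw ratio drops as a text grows longer, but the intuition is the same: a shrinking working vocabulary leaves a measurable trace.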

    It’s exciting to consider how much we may be able to learn about ourselves using non-contextual information. Words unrelated to each other in everyday usage still form patterns that only become visible at a larger scale. Everything you do leaves a mark on the world, and soon we may be able to better understand our markings and appreciate our histories holistically.

    I imagine the future like learning the answers to questions we never thought to ask.

    Edit 5/11/10: Agatha Christie also wrote dozens of diary entries and notes about her books that may have shown signs of dementia. (via @JadAbumrad: “Agatha Christie’s deranged notebooks (interesting to read after the latest @wnycradiolab podcast) – http://bit.ly/ar2smX”)

    Edit 5/14/10: For an interesting exemplar of Facebook linguistic data-mining, see their Gross National Happiness trend index. The study describing the methodology used is cited below the chart.

    Needs Less Cowbell (Trololo Explained)

    In early March I thought about Eduard Khil and his new claim to fame as the Trololo Man thanks to viral internet sharing of his 1976 vocalization performance. The song, “I Am So Glad To, Finally, Be Returning Home”, was originally composed with lyrics, but composer Arkadiy Ostrovskiy and Eduard Khil decided together before showtime to strip out the lyrics and replace them with vocalization singing, substituting vowel sounds for the words and resulting in the video you see today.

    So what happened to cause Arky and Edik to scrap the lyrics before showtime? Russian news org Life News asked Eduard:

    “Originally, we had lyrics written for this song but they were poor. I mean, they were good, but we couldn’t publish them at that time. They contained words like these: “I’m riding my stallion on a prairie, so-and-so mustang, and my beloved Mary is thousand miles away knitting a stocking for me”. Of course, we failed to publish it at that time, and we, Arkady Ostrovsky and I, decided to make it a vocalisation. But the essence remained in the title. The song is very naughty – it has no lyrics, so we had to make up something for people would listen to it, and so this was an interesting arrangement.” – Eduard Khil 14.3.2010

    Soviet Coat of Arms

    The Trololo video was filmed in 1976, a time when Soviet media was widely and routinely censored by the USSR regime. Though Eastern Bloc censorship in the ’60s and ’70s had eased somewhat after the Khrushchev Thaw that followed Stalin’s oppressive rule in the first half of the 20th century, the state-controlled media still felt strong pressure to reinforce a Socialist Realism narrative and repress contrary narratives in the shadow of the lingering Cold War.

    Arkady and Eduard were no strangers to the arts in this environment. They knew that lyrics about a cowboy and his pioneer wife evoking vivid landscapes of the American West would immediately raise a red flag (sans hammer & sickle) for the television broadcaster. Knowing that censorship officials would almost certainly reject the song with seemingly pro-Western lyrics, the change-up to vocalization was the only viable option. A meme was born that day, but it would be decades before anyone knew it.

    In the 1980s, about a decade after filming (and about two decades before the internet would remind the world of the video’s existence by dubbing it with the onomatopoetic sobriquet “trololo”), Mikhail Gorbachev, the last General Secretary of the Communist Party of the Soviet Union, began implementing his policies of Perestroika and Glasnost. Glasnost, a policy of “maximal publicity, openness, and transparency in government”, brought with it freedom of information and new cultural freedoms for citizens of the Soviet states. This time of sweeping change under Gorbachev’s leadership eventually led to the collapse of the USSR and the emergence of modern Russia and the independent nations formed from the former bloc states of eastern Europe.

    A Soviet stamp propagandizing Perestroika and Glasnost

    So time passes and along comes the Интернет, where some nostalgic chap casually drops the clip onto YouTube under its original Russian title, “Я очень рад, ведь я, наконец, возвращаюсь домой”. It sits for a little while as an esoteric example of bygone Russian entertainment, until earlier this year. In February and March 2010, the video shot upwards in popularity when it was ‘discovered’ for its quirkiness and re-purposed as a ‘bait-and-switch’ comedic device (see Rickroll). Hundreds of spoofs soon spawned around the original video, and with the help of large-audience propagators (e.g. The Colbert Report) the trololo internet meme was well on its way. The original clip on YouTube alone has quintupled to five million views since last month.

    Eduard Khil welcomed the sudden flood of attention after apparently first learning about the phenomenon from his 13-year-old grandson, who purportedly came home from school one day whistling the tune and had to explain to his grandfather why this old song was suddenly popular because of the Internet (it’s a series of YouTubes).

    Partly due to the rush of Russian media attention, Eduard began making a handful of public appearances in mid-March, just weeks into the meme’s upswing. In a broad response to the often-asked question about the lyrics, Eduard published a video address in which he suggests that his fans collaborate to write new lyrics for the song. His earnest proposition is a testament to our global society’s relatively modern freedom to create and share with impunity. No governing body can truly censor media it can’t predict or intercept. Cultural memes like trololo are exemplary of the explosions of creativity that happen when it is both easy to create and easy to share.

    So… be creative and prolific! Don’t forget how much power the Internet as a communicative medium grants you; even if there exist those who would censor you, you need not alter the fruits of your labor to fit another’s narrative.

    Why it feels like Easter time

    Two quick Easterly follow-ups to the thought a few days ago on April Fools’ Day as a holiday in celebration of the vernal equinox (i.e. spring).

    • The vernal equinox, I’ve since learned, can be considered either the ‘first day of spring’ or the ‘middle of spring’ in the northern hemisphere, depending on your perspective (reckoning by ground temperature change versus by the astronomical equinox, when the sun stands directly over the equator and day and night are nearly equal, respectively).
    • It’s Easter! Why is it “Easter”? Easter is a critically important religious holiday for Christian faiths. So why not call it Resurrection Day (a few do), or the Festival of the Ascendance, or Jesus April Fools Day? According to the Oxford English Dictionary’s AskOxford.com: “Etymologically, ‘Easter’ is derived from Old English. Germanic in origin, it is related to the German Ostern and the English east. [Bede] describes the word as being derived from Eastre, the name of a goddess associated with spring.” So, at least in name if not spirit, Easter has strong ties to the season of spring.

    Ok, one more:

    • Easter Bunnies and Easter eggs came into the picture about a millennium and a half after the holiday got its roots, around the 1600s in Germany (then part of the Holy Roman Empire). Originally, the German tradition of bringing eggs was not linked to Easter, nor were the eggs edible. America especially liked the tradition and adopted it from German immigrants (much as it did Kris Kringle), and in the modern era the Easter bunny and colorful eggs are the ubiquitous symbols of a secularized Easter. This linking of imagery was not threatening to the Christian churches because bunnies and eggs are ancient symbols of fertility. From Wikipedia: “Eggs, like rabbits and hares, are fertility symbols of extreme antiquity. Since birds lay eggs and rabbits and hares give birth to large litters in the early spring, these became symbols of the rising fertility of the earth at the Vernal Equinox.”

    I’ll close with an intriguingly opposed perspective (so to speak) from an Australian social researcher, Hugh Mackay, on Easter:

    “A strangely reflective, even melancholy day. Is that because, unlike our cousins in the northern hemisphere, Easter is not associated with the energy and vitality of spring but with the more subdued spirit of autumn?” – Hugh Mackay

    All Fools Today

    Jester Mask

    Did you know that Americans originally created the Joker playing card in the 1800s as the highest card for the game of Euchre? Juke, but no joke.

    It’s April 1st, and that means you’ve been made a fool. Not by me, of course; the tidbit above is not prevarication, despite your uncertainty. Nevertheless, you are being foolish. You look foolish right now and I can’t even see you! You act downright medieval, you’re such a fool!

    (Please excuse the jester.)

    The exact founding of this “All Fools’ Day” on the first of April isn’t known, though it is known that the practice goes back hundreds of years, with about as many theories as to its origin. Owing to its long history (and reasons I’ll detail), April Fools’ Day is very popular in Westernized countries around the world. Interestingly, in the traditional culture of some countries such as the UK, Australia, and South Africa, you’re only supposed to ‘fool’ before noon; if you prank someone after noon, you’re considered an April Fool.

    April Fools’ Day thrives because we’re particularly ready for a goofy time of year, so to speak. With a physiological basis in enjoying the anticipation of an unexpected thrill (à la dopamine), a mood of lighthearted puckishness meshes well with the season: moving from the cold winter indoors to the onset of sunny spring outdoors. The months following the winter holidays are perceived as particularly dreary, so by the time April arrives people are anxious to celebrate the seasonal change. Thinking sociologically, it’s how societies celebrate the vernal equinox on a day that is easier to remember.

    Even though the jocundity we experience personally on this day isn’t always memorable, we are always eager to hear about clever pranks. Lists of both the well-known and best recent April Fools circulate widely every year as testament to this desire, and you’ll probably read at least one by the end of the day. Each communicative medium has its own class of hoaxes, from print to radio to TV. And now of course there’s the Internet, the most likely source of your prank news in the modern era.

    The desire for and expectation of deception are problematic, however. That’s probably why we do this hoax holiday only once a year. Important factual news announced on April 1st is automatically doubted by a public wary of being caught unawares as fools. We keep ourselves at a distance lest we fall into some emotional trap and look silly (even as we quietly desire the silliness). Distancing is normal behavior we employ all the time, but it’s especially pronounced when you’re keeping yourself constantly alert to trickery. Problems occur when this heightened skepticism affects our perception of serious stories that we would otherwise give their accordingly serious consideration. We restrict receptiveness and compliance, which can incapacitate systems that rely on precise communication or timely cooperation.


    To illustrate the effects of this shift in our approach to news on April Fools’ Day, one need only look back at the stories that emerged the last time it came around. On April 1st, 2009, a school in the town of Albertslund, Denmark nearly burned to the ground because the fire department refused to believe the first two calls reporting the fire. Naturally the firemen, being normally helpful people, rushed to extinguish the flames once repeated calls forced them to realize their mistake, and the school was fortunately not a total loss. Nonetheless, the anecdote shows that losing response time in a time-critical situation can have catastrophic consequences.

    To terrorize your lighthearted day of puckishness a little more personally: one can easily imagine that the psychological caution we employ on April Fools’ Day acts as convenient cover for malicious pranksters. In another story from last year, on and around April 1st, 2009, American mainstream media paid a great deal of attention to a rapidly spreading computer worm (often cited as a variant of Conficker). Without knowing enough to assess the immediate danger of the virus, news outlets warned the public at the speed of panic, as news is wont to do.

    Unlike the Danish school fire, the worm posed relatively little danger, especially in contrast to the slew of new viruses unleashed on the web every day. Yet, as in the Danish incident, the ambiguity of the purported threat led to overreaction. In the case of the fire department it was inattentiveness; in the case of the news outlets it was over-attentiveness, drowning out other, more relevant news. Either extreme leads to neglect.

    Now that you know you’re playing the fool whether you like it or not, bear in mind the distinction between rational response and irrational response as you take in the day’s news this year. After all, it’s April 1st and everyone’s a bit foolish. So keep your jokes… practical.