GEEK OUT; For the Geek in All of Us

Sam Miguel
01-15-2016, 02:19 PM
Inside every Gameface member is a geek. So here is a thread to celebrate that, and you can also take your own geek discussions here.

Let us start off with something from Cracked, an institution that deifies Geek:

5 Classic Geek Debates That Were Settled a Long Time Ago

By Luke McKinney | January 14, 2015 | 961,859 views

The Internet is an infinite playground filled with imagination, joy, and childish assholes screaming at each other. The freedom of online discussion has elevated nerd arguments into an art form. But, just like modern art, there are still too many psychopaths flinging shit and screaming under the assumption that it's all as important as real thought.

Two people arguing over which is the coolest Doctor aren't losers, because they're both doing something they enjoy. They have all the advantages of football fandom with far more source material, a much wider variety of pitches, and the entertaining advantage that their hobby doesn't continually cripple people.

Ridley Scott recently answered Blade Runner's eternal question of whether Deckard is a replicant. And if you don't want to know the answer, then whatever you do, don't remember that he's a character who thinks he's human in a sci-fi movie entirely and only about how replicants can pass as human. And as Cracked's nerdiest nerd -- I not only understand questions like "How did Pacific Rim's Kaidonovskys survive?" but have strong feelings about them and provide multiple answers -- I've found five more eternal nerd questions with definite answers.

#5. Star Destroyer vs Enterprise

With the ridicuzillions of stars in the universe you'd think there would be enough to both trek to and war over, but no. Star Destroyer versus Enterprise has been the iconic nerd battle since we looked at the Enterprise's wonderful mission to find new life beyond the stars and asked, "Yeah, but how good are you at blowing shit up?"

The Star Destroyer redefined science-fiction cinema. Its first appearance taught us that far above the sky there is the awesomeness of space, and far above that there's an even cooler and larger spaceship.

The Enterprise is the image of advanced technology, humanity's best using its most advanced knowledge to learn even more. The Star Destroyer is the embodiment of irresistible bureaucracy: only an empire could afford to build something so insanely huge, and it's unstoppable despite being staffed with people so eminently replaceable that murdering the captain is how you add exclamation points to internal memos.

The resulting battle isn't just an action scene, it's an ideological debate. People have spent decades working out every possible angle of the conflict, from military tactics to technological interactions to raw energy outputs painstakingly extracted from freeze-frames of the source material.

Easy Answer: Enterprise

If your enemy has transporter technology and you don't, winning isn't one of your options. You get to choose between surrendering, exploding, or choking in space. Darth Vader can asphyxiate someone by raising his arm. Transporter Chief O'Brien can do the same thing by raising a few fingers and beaming them into space.

The Star Wars universe has never even heard of transporter technology, and so wouldn't have any defense against it. The Enterprise doesn't even need to beam photon torpedoes onto the Destroyer -- just remove any one of the million things the Destroyer needs to prevent itself from exploding and you're done. The movies established that Imperial technology is shorter-lived than Stormtrooper armor with a red undershirt.

Narratively, things are even worse for the Empire. The answer is in the "the" (and the italics reserved for proper craft names): the Enterprise is a named hero, while Star Destroyers are nameless minions. Not one in the movies has a title or a victory. Named hero versus minion only ever goes one way.

(Consolation prize: the Star Destroyer hasn't been in nearly as many terrible movies.)

#4. Sega vs Nintendo

In the '80s, Sega versus Nintendo was the Sharks versus the Jets, but on school playgrounds, and with even less convincing physical violence.

We loved them no matter how cruelly or often they killed us every night. This was back when liking video games was less socially acceptable than stealing panties, which at least demonstrated physical ability, interest in sex, and several possible career skills. But instead of getting together to discuss games, Genesis and SNES kids would mock each other's almost identical passions. It's the purest example of capitalism fueling conflict among minorities who really should have been working together against wider social problems.

Easy Answer: Nintendo

Sega would go on to release more failed machines than a Skynet factory, and they were even less popular with the end users. The Sega CD, 32X, Saturn: they couldn't have eliminated childish joy faster if they'd released a cerebral bore trying to get your credit card number.

I was a Nintendo fanboy, and I used to say that Nintendo could release a console that only they made games for and still succeed. I wasn't expecting them to actually test the theory, but with the Wii they've done exactly that. And made zillions of dollars. They're making so much money from so few intellectual properties, they threw the best one away just to show that they could. To say nothing of their terminal ignorance of F-Zero.

As kids we couldn't even dream of Mario and Sonic on the same console. Now we have Mario & Sonic at the Olympic Winter Games on the Wii.

That's not just Sonic working with Mario -- that's Sonic crashing on Mario's couch because he's got nowhere else to go.

#3. Is Cobb Awake at the End of Inception?

The ending of Inception could only have annoyed more people if Dominic Cobb had spawned a mafia family, gone to a restaurant, and suddenly cut to black. The movie ends with Cobb finally returning home to see his children. He spins his totem top to find out if he's truly awake or trapped in a wonderful dream, then walks off to see his children without waiting for the answer. And lo, the Internet went crazy.

Is he awake? Is he still dreaming? People went to insane lengths, like tracking down the movie's costume director, to find out if the children are wearing the same clothes as in previous dream sequences.

Easy Answer: Awake

This answer has multiple levels, and is therefore perfect for the movie. The first level is that we're not meant to know for sure whether he's awake or not. But the Internet reacts to ambiguity the same way Star Trek computers react to ambiguity -- screeching and violent explosions. The second level -- from the director himself -- is that the real answer is that Cobb doesn't care anymore; he just wants to see his kids. But that's garbage. If he truly cares about his kids, it's even more important that he knows he's awake; otherwise, those aren't his kids. And if he's prepared to accept a dream, he should have done that back before he manslaughtered his wife by mind-abusing her until she committed suicide. Oh, and he watched her go insane for months without going back into her dreams to fix his spiritual sabotage, which is his only skill -- Jesus, Cobb.

Luckily, we can go deeper, and on the third level he's obviously awake. The camera cuts away just as the top starts to wobble. Which means he's awake! In the dreams, that top spins like a laser-aligned black hole. It won't stop before the universe ends or the dreamer dies, that being its exact described function. When we see it in a definite dream (you know, when he violates his wife's brain without her knowledge or consent), it spins perfectly.

So if it wobbles, he's awake. Easy. This theory is less about the mysteries of the movie and more about the effects of the Internet on people's intelligence. It's no longer enough to work out an answer -- they want to be told an answer from a definite source. But the whole point of the brain is working things out, not just referencing sources. Referencing sources is the Internet's job, not ours.

Sam Miguel
01-15-2016, 02:21 PM
^^^ (Cont'd )

#2. The Best Zombie Weapon

The best zombie weapon is one of the eternal nerd questions. It's equal parts fantastic and pointless -- even if it was based in reality and wasn't an obvious daydream about being a sociopath, fantasizing about the best weapon during a zombie outbreak is like picturing your best possible underwear during an orgy. They might be briefly useful, but most of the time your attention is going to be spent on other things.

I've already explained why the most popular choices are wrong. Guns are useless unless you're carrying enough ammunition to build a castle to sleep in. Melee weapons are worse, because you can't dismantle an undead body with a crowbar without it at least slightly damaging your living (and infinitely more vulnerable) body. So now you're either infected or that fatal bit slower for the next attack.

Easy Answer: Lightsaber

To finish this unkillable debate once and for all: The best anti-zombie weapon is the lightsaber. Light, portable, near-silent, apparently infinitely powered, and never again will there be a single second's bullshit of frantically shaking padlocks/chains/handles/keypads while the hordes close in. It's the ultimate combination of lock pick and can opener.

In fact, zombies are the only enemy on which the lightsaber would actually work as a weapon. It would take real enemies one fight before they started using shotguns, flamethrowers, or more-than-one-person-firing-simultaneously.

With the unlearning undead, you can spin and carve through the hordes with total impunity. It even works to amputate and cauterize infected limbs. In fact, it's the only way to hack up infectious-fluid-filled meatbags at close range that isn't insane.

And don't give me any shit about it not being a real weapon. We're fighting zombies! If we're allowed to ignore enough laws of physics and biology to have zombies shambling around, a little bit of energy projection is nothing. Hell, thermodynamics will loan me that much energy just to clean up this appallingly impossible undead mess.

#1. Superman vs Batman

Superman versus anyone is always fun, and always Superman. Superman versus Hulk? Hulk can match him for strength, but he can't fly, so just fling him into space. Superman versus Thor? If anyone is worthy to lift Mjolnir, Superman is worthy to lift Mjolnir, and we're back to Superman. Superman is the best. Which is why everyone loves it when Batman belts him right in the kisser.

Superman versus Batman is a conflict we've already enjoyed, but this time we're not talking about story potential or effects on continuity. The question is: who would win in a fight?

Easy Answer: Superman

Obviously. Obviously. The reason all the arguments about Batman winning are so cool is that it's impossible but he's going to try anyway. Even his most awesome victory ...

... is only because Superman agrees not to kill him from space, meets him exactly where Batman politely asks, agrees not to kill him from space, gives Batman the first five shots for free, agrees not to kill him from space, and has recently lost most of his power in a little spot of saving the world from a nuclear mega-warhead. And even then he lets Batman beat himself to death while thumping Superman, then stands up as if to say, "OK, got that out of your system?"

When someone can literally kill you as soon as he looks at you, your only hope is to murder him before he even knows you're there. And since neither of them will kill, that's it for Bats. Superman could cauterize Bats' limbs off and dump him in a vat of life-sustaining bio gel before Batman could reach his first pouch.

That's the point of Batman, being able to have a go no matter how impossible the challenge. And that's the point of Superman, NOT instantly killing anyone who becomes a problem. Their ideologies are captured by (and are guaranteed to result in) this ass-kicking. That's why they'll keep fighting, and that's why they'll keep coming up with awesome new ways for Bruce to beat back the invincible, and that's why it'll keep being awesome despite being an obvious conclusion. Because the point of all these questions is the fun debate, not a definite answer.

Sam Miguel
01-15-2016, 02:26 PM
6 Insanely Valuable Real Treasures (And How to Steal Them)

By Ben Denny | June 20, 2012 | 851,030 views

Heist movies such as Ocean's Eleven and The Italian Job like to present the world as a loose network of heavily guarded treasures, just waiting for you and your ragtag yet likeable bunch of henchmen to pocket them.

And you know what? The real world is exactly like that, too. There's loot scattered all over the world, just begging for a charming gentleman thief and his plan that is so insane that it just ... might ... work.

#6. The Great Pyramid's Secret Chambers

The Great Pyramid of Giza is easy to brush off as old news when it comes to heisting. After all, it's been there for a long time, and almost every chamber has already been emptied 10 times over.

There are still some interesting discoveries to be made. For instance, there are two tiny shafts that climb up from the queen's chamber. They are far too small for a person to climb, and they end in strange blocks with metal rings embedded in them. A secret treasure chamber, perhaps?

Researchers drilled through the block at the end of the shaft in 2002 and indeed found an honest-to-goodness secret chamber. Treasure-hunting-wise, though, it was the most disappointing Easter egg in the history of ransacking cultural landmarks -- it boasted some ancient red graffiti, a dummy door and absolutely nothing worth stealing whatsoever.

However, the other secret chamber is a different matter. There's another secret passageway behind the king's burial chamber, and this one is big enough for a man to go through:

Those four stones in the center are carefully placed in a zigzag pattern to take the load of the stones above off them. Experts are fairly certain there are hidden chambers behind this portico structure, and radar probing seems to back this up.

What's more, what we currently think of as the King's Chamber of the pyramid might have been just a dummy, and that hidden room may be the actual inner sanctum. Who knows what riches it might contain?

However ...

Apart from the usual warnings about the curse of the mummy and how really unbelievably bad the Egyptian prisons are should you get caught looting, there's a much more practical problem: sand.

When the wall in the King's Chamber was examined with a microgravity probe in 1986, they found that, although there likely is a secret chamber, it's completely filled with sand. It makes sense, really -- as anyone who's ever been in a sand desert can testify, that shit gets everywhere.

So unless your heist plan includes an army of DustBuster-wielding French maids, pyramid raiding might end up being more trouble than it's worth.

#5. The Vatican Library and Secret Archives

Take one of the oldest, most historically wealthy global forces in existence. Add in uncountable numbers of history-obsessed minions, centuries of time and a heavy penchant for hoarding.

The end result is a collection of artifacts and valuables so big, it takes the basement level of an entire country to store. You store your change in a piggy bank, they store theirs in every piggy bank that has ever existed.

The Vatican Library is one of the oldest museums in the world, and probably the most extensive in existence: It holds 1.6 million books, 180,000 manuscripts, 300,000 rare coins and medals and 150,000 assorted prints, drawings and engravings. We're not talking about paperback copies of The Hobbit and the novelization of Air Bud here; these are rare and valuable items from all corners of history. Even discounting the scholarly value, a whole lot of this stuff would be enough to keep the black market in a bidding contest for years.

And if you need more than mere money to float your boat, there are always the Vatican Secret Archives.

Founded in 1610, it's a historic library chock-full of documents, church records and intrigue. The Secret Archives span a cool 52 shelf-miles of records. Only a handful of researchers are allowed (limited) access, and only a laughably tiny fragment of the material is available to the public. Nobody but a handful of specially selected clergymen truly knows the contents of the archives.

So what lost secrets do the archives hold? Apocrypha that hold secrets capable of changing the world as we know it, Dan Brown style? The recipe for a Highlander serum? A bitchin' collection of solid gold pope hats? The world may never know ... unless, of course, someone sneaks in and helps reveal everything.

However ...

Two words: Swiss Guard.

The Swiss Guard is the oldest standing army in the world, with a 500-year tradition of service to the Vatican. To apply for a position, a prospect needs to be (obviously) Catholic and Swiss, have military experience and pass a rigorous interview and evaluation process. These hard, hard men volunteer to protect the most important physical part of their religion and, if necessary, sacrifice their lives for it.

And they do it despite knowing full well that they're required to dress like this:

So, by all means, go liberate a few documents from the Vatican. All that stands between you and newfound riches is a battalion of expertly trained, determined, halberd-wielding Rambo clowns.

#4. J.D. Salinger's Secret Book Vault

The Catcher in the Rye, J.D. Salinger's 1951 masterpiece, is one of the most famous books in history, inventing the teen angst novel a full 54 years before Stephenie Meyer was able to ruin the genre. Maybe Salinger saw that coming, because in 1953 he ran away to rural New Hampshire for no apparent reason. He stopped publishing his work (save for a few carefully selected short stories) and refused all interviews or photographs. Instead, he preferred to practice acupuncture on himself, tan a whole bunch and enjoy a sample platter of as many weird religions as possible. Salinger stayed in his remote cabin until his death in 2010, systematically fighting all things related to his "revered author" status to the very end.

But here's the thing: He never stopped writing.

According to his neighbors and children, Salinger wrote his ass off for several hours a day, every day for the past half century. He had reportedly finished up to 15 novels before his death.

Fifteen unpublished novels. From one of the greatest authors of all time.

Although Salinger never showed them to anyone and kept them in a secret stash, we pretty much know their location, too -- they're in either a vault at his bank (easy peasy!) or his own personal safe (hahahahaha!).

However ...

What if it's all drivel?

While there's a fair chance this vault might hold the greatest works of literary genius ever written, there is the worrying fact that Salinger's short stories were getting weirder and weirder toward the end. The man himself certainly did. His brand of reclusiveness was always of the Howard Hughes variety, allegedly drinking his own piss and driving his loved ones insane with his eccentric behavior. These aren't generally considered good signs for a writer whose notoriety lies in his ability to craft grippingly realistic coming-of-age stories.

So while this particular treasure is probably the easiest on this list to reach, just remember to brace yourself for the possibility that your finds will be less about transcending adolescence and more about the distinctly different flavors of Tuesday and Friday urine.

Sam Miguel
01-15-2016, 02:28 PM
#3. The Smithsonian Vaults

If American history is your bag, there's no reason to burgle further than the Smithsonian's National Museum of American History. It holds virtually all cornerstones that constitute America, such as the original Ruby Slippers from The Wizard of Oz, the top hat Abraham Lincoln was shot wearing and Teddy Roosevelt's desk.

The Smithsonian Museum has an estimated 3.2 million such pieces, and more and more drift in every year.

More importantly, from the viewpoint of your gang of gentleman thieves: A good chunk of their collection also drifts away.

An audit of 2,216 items in the Smithsonian's inventory found that a full 10 percent of audited items were missing. This wasn't just random stuff -- 33 missing objects were rated "Tier 4," meaning they're worth at least a million dollars each. If that pattern holds over the entire collection, that means that the Smithsonian has lost track of nearly $48 billion in priceless American history.
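For the fellow geeks who want to check Cracked's math, here's the back-of-the-envelope calculation behind that "nearly $48 billion" figure. The audit numbers are from the article itself; the straight-line extrapolation across the whole collection (and valuing every extrapolated Tier 4 item at its $1 million floor) is our own assumption, not the auditors':

```python
# Rough sketch of the extrapolation, using only figures quoted above.
audited_items = 2216
tier4_missing = 33            # audited items missing AND rated Tier 4 (>= $1M each)
total_collection = 3_200_000  # estimated pieces in the Smithsonian's holdings
tier4_floor = 1_000_000       # minimum dollar value of a Tier 4 item

# Assume the audited sample's Tier-4 loss rate holds museum-wide.
loss_rate = tier4_missing / audited_items          # about 1.5 percent
estimate = loss_rate * total_collection * tier4_floor
print(f"~${estimate / 1e9:.1f} billion")           # ~$47.7 billion, i.e. "nearly $48 billion"
```

Note that this is a floor, not a total: Tier 4 only sets a minimum value, and the 10 percent overall missing rate covers plenty of cheaper stuff, too.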

However, it's not because of thieves. The Smithsonian folk just suck at cataloging. It's not really their fault: Proper cataloging and labeling take loads of man-hours and expertise, and both cost money. And as their rich patrons find it difficult to brag about how much labeling they've paid for, most of the donations the Smithsonian receives are earmarked for the sexier exhibitions instead of basic grunt work. Lacking the budget, the institution doesn't have a lot of options besides locking the storage doors every night and hoping to get its books in order in some distant future.

For a prospective burglar, this means their vaults are essentially a giant, uncataloged jumble sale -- only instead of a $50 sweater for $2, he may find Custer's jacket for the price of a crowbar.

However ...

By stealing from Smithsonian vaults, you are also grave-robbing Theodore Roosevelt.

As we are fond of pointing out, you don't particularly want to screw with Teddy. As it happens, the Smithsonian was one of his most beloved pet projects, and he supported it his entire life.

So while you might think that you will have years to enjoy the original Teddy bear before anybody comes looking for it, you are really just begging for Roosevelt's ghost to re-enter the mortal plane with the sole intent of dick-punching you to oblivion.

#2. The Trivandrum Temple

You know how sometimes you're going through your old stuff and find, say, an old comic that's now a collector's item? That's what happened to the Hindu temple of Sri Padmanabhaswamy in Trivandrum (which we will be calling "Trivandrum temple" from now on, because come on).

Only instead of a relic of your childhood that's now worth $22, they found some relics of their own. Worth $22 billion.

In 2011, the temple's basement was found to contain six vaults, which in turn contained offerings for the gods, gathered there over the last five centuries or so. Five of the vaults were full of gold, jewels and precious artifacts, sealed and left to their own devices. The sixth vault remains unopened.

Understandably, the find is causing a lot of trouble. The royal family, who preside over the temple, feel it belongs to them. The government wants a cut. The people of India, on the other hand, consider it public property and would very much like it back in the hands where it belongs (i.e., theirs).

And during all this quarreling, the treasure is staying right where it is. Untouched. Waiting.

However ...

The officials say that the Trivandrum temple area is heavily guarded by deliberately undisclosed security measures, which, according to our pop-culture-based expertise in treasure hunting, means it has either giant stones chasing you in narrow corridors or pits filled with venom-spewing scorpions.

What's more, the five vaults that have been opened are at least partially cataloged, so stealing even one tiny little golden statue might get you chased down by an entire nation.

As for that sixth vault ... the door is fitted with special locks that are far superior to the other five, and emblazoned with the figure of a snake. A member of the royal family has stated that opening it would be "a bad omen." And in the context of robbing Hindu temples, it seems like opening the ominous snake door might be tempting fate just a little too much ...

... is exactly what a coward would say. But that's not you, right? Right! So go ahead and kick open that vault, you fucking treasure-eating animal!

#1. The Federal Reserve Bank of New York

In the heart of New York City, there is a bank building constructed with one message in mind: "Fort Knox is for pussies."

The Federal Reserve Bank of New York is a 22-story stone monster so ridiculously oversized and over-protected that it seems less like a bank and more like Scrooge McDuck's money bin. In fact, it contains a money bin -- a hall three stories high and the size of a football field, filled with cash money. However, we're not setting a foot in there, as that particular vault is infested with robots.

Besides, the premises feature a much more enticing target. The N.Y. Fed is home to the largest gold repository in the world, holding an estimated 25 percent of the world's gold reserves. All the gold. In the whole world.

That's nearly $200 billion in solid goddamn gold bricks. Its disappearance should be more than enough to crash the world's economy twice over, if you're into that sort of thing.

That's the prize. Here's what you need to go through to get it.

First of all, digging in is not an option, unless you have access to some serious mining equipment. The vault sits 80 feet below street level, and the rock that surrounds it is hardcore -- the bedrock under New York is one of the very few foundations hard enough to hold the weight of this kind of construct. Walking in by means of some convoluted Ocean's Eleven scheme is no easier: The only entrance is a narrow 10-foot hallway cut into a 90-ton cylinder, which in turn lies in a 140-ton steel and concrete frame. The cylinder rotates 90 degrees to close each night, and its doors seal the vault airtight by sinking into the framework.

Forget bribing someone from the inside to open the vault on command, too. The entrance cylinder is controlled by a series of locks that can only be unlocked with the cooperation of several different people, at specific times of day.

So, a perfect challenge! Just grab your experimental military-grade Spider-Man boots and that invisibility armor you were stuck with after the Luxembourg gig and jump right in.

However ...

Even if you do manage to sneak in, defeat countless tons of steel, rock and concrete and then find a way to move the gold (reinforced skateboards? trained bees?), there's one last surprise for you: the elite army of marksmen.

Did we not mention the elite army of marksmen? Because there totally is one.

While security guards are always to be expected, these particular ones are no ordinary stormtroopers. The vault is protected by one of the largest private uniformed protection forces in America, and each and every one of them is contractually required to be able to shoot the dick off a fly.

You are now in the vault. The only way out is that tiny cylinder. And they are all waiting at the other end.

Man, you really should learn to read these articles in full before leaping into action.

01-25-2016, 03:16 PM
6 Futuristic Technologies That Are Huge Disappointments

By E. Reid Ross | January 24, 2016 | 284,773 views

We all understand that technological progress has its dark trade-offs. See: pollution, carpal tunnel syndrome, the fact that our telecommunications system has facilitated the ascendance of the Kardashian family as living gods. But given the cornucopia of newfangled doodads we're immersed in daily, we tend to be less aware of when our gadgets start sucking a wee bit more. What products are we talking about? Well ...

#6. Smartphones Are Pretty Bad At Being Phones

We'll cut right to the quick. When it comes to audio quality for longer conversations, smartphones rank a rough third behind landlines and two cans connected by a string. The problem is that we're willing to accept this lack of clarity in favor of being able to slide our phones into our skinny jeans.

See, companies "often shrink, flatten, and cover speakers in plastic to improve their phones' overall functionality," which means that your general audio quality is being sacrificed for better performance of whatever Civilization V knockoff Kate Upton's boobs are selling now, as well as the ability to fit the damn thing in your hand. While there are scattered reports of new technologies on the horizon which promise to remedy the situation, there has been absolutely jack shit accomplished over the last couple of years.

Promises of distant solutions are one thing, but the truth is that manufacturers likely don't give a gurgling shit. They've predictably gone the more profitable route by concentrating more on making phones into Fruit Ninja-ing, genital-uploading mini-computers rather than enhancing their usefulness in regards to the original purpose Alexander Graham Bell intended for them.

Complaining about an innovation as wonderful as the smartphone can seem overly whiny, and we realize how wonderful it is to live in a time in which you can be virtually anywhere on Earth and stream any season of Friends on a pocket computer. It's just that on the off chance you need to dial 911 over a raccoon infestation, it'd help if your phone's audio quality ensured that the dispatcher could decipher your muffled screams.

#5. Car Knobs Are Way More Useful Than Touchscreens

As far as shapes go, circles are rather fantastic. This is why we've used them so often for things we need to rotate through. It's simply a quick turn from 97.5 WHOA to 98.1 WHAT on the radio, and all is right in the world.

However, starting around the time Steve Jobs got bored with his iPod's revolutionary clickwheel, touchscreens began to take over the world. What used to be an interactive museum gimmick was suddenly a regular part of our lives. But if customer feedback is to be believed, touchscreens suck.

It turns out that touchscreens can often be a whole lot less responsive than a good old-fashioned dial. Couple this with the fact that the display setup is often a confusing jumble of boxes and text (not exactly a plus when you're trying to maneuver through rush hour traffic), and our collective anger could rival Yosemite Sam trying to escape from a straitjacket.

Irate customer feedback has prompted companies like Ford to bring knobs and buttons back to replace many of the annoying features of the touchscreens, as this automotive strategist explains: "Ford is making the change due to negative feedback they've received regarding several aspects of MyFord Touch. The system can be sluggish to the touch, while knobs and buttons obviously have a much quicker response. The four-quadrant system is also very text and information-heavy, making it overwhelming and confusing for some to do even simple tasks."

#4. Virtual Reality Is Making Games Stupider

Back in the '90s, headset gaming crashed and burned due to hardware that everybody assumed screwed up your eyes and, more importantly, crapsack games. Technology has since caught up with the concept, and we're ready for all the porn incredible adventures we could only have imagined before. The problem is that the novelty is going to wear off fast, and what we're left with is a gaming system that makes Mario Kart 64 look like the goddamned Mona Lisa.

That up there is a demo for Lucky's Tale, a third-person platforming game created by one of the people behind Words With Friends and designed solely to show off the Oculus Rift virtual reality headset. This would be cool if you, the gamer, were actually the main character, but you're really only hovering in space above it, like some kind of ghost that respects boundaries. There have been plenty of positive reviews, but then, plenty of people like roller coasters after a 90-second ride -- not so much after riding one for three hours straight.

What developers have found is that player-character mobility and exploration options will likely have to be scaled way back, since "excessive camera movements make the player sick." Lucky's Tale deals with this problem by forcing the player to engage with the environment in the most basic way: Going from one goal to the next via the exhilarating, singular option of traveling in a straight line.

As it stands, keeping things aggravatingly simple is something that all virtual reality games are likely going to have to do, if for no other reason than to keep their customer base from choking on their own puke. This is, of course, assuming that increasingly intricate VR doesn't utterly shitbox your brain, but that's a story for another article (which we already wrote).

01-25-2016, 03:18 PM
^ (Continued)

#3. 3D Printers Can Be Remarkably Unsafe

The invention of the 3D printer has been touted as the greatest breakthrough in mankind's never-ending quest to one day never have to go to a store again. However, keeping one in your house can be about as safe as adopting a family of stray mambas.

See, objects concocted in a 3D printer aren't smooth at all, and are in truth rife with microscopic nooks and crannies that are perfect for storing whatever manner of filth they come in contact with. Repeated physical contact with a 3D-printed object (you know damn well where we're going with this) can result in a veritable potluck of bacteria and viruses, and their porousness makes them more difficult to clean thoroughly.

To use an even more innocent example, say an unsuspecting "cool mom" used a 3D printer to make some cool-looking forks and spoons for her kids. Unfortunately, scientists would warn that she'd better be extremely careful about it, unless she wants to spend the rest of the day explaining herself to Child Protective Services. Making flatware that won't shred your mouth parts is technically possible, but you'd better pony up for some specialized material, unless you want to risk your children showing up at school looking like you've been putting powdered glass in their popsicles (while simultaneously poisoning them with toxic chemicals).

Last but certainly not least, merely having one of these printers in your house can give you the kind of symptoms that used to require decades of working maskless in an industrial plastics plant. See, the 3D printing process puts out plenty of toxic fumes when things heat up. An analysis revealed that it can fill the air with "ultra fine particles" that can cause a laundry list of ailments, such as "lung function changes, airway inflammation, enhanced allergic responses, altered heart rate and heart rate variability, accelerated atherosclerosis, and increased markers of brain inflammation," which you'll notice is about double the length of the warning label on a pack of Marlboros.

#2. College Students Freaking Hate E-Books

Given that you're reading this article on a screen instead of settling for the magazine that was left after all the copies of MAD were sold out, it's safe to say that digital words are the way of the future. Even academia has come around -- there are probably more college students today who have never seen the inside of a real book than ... wait, no. Can't be.

Shockingly, it turns out that the people whom you might expect to fully embrace the new technology -- college-age millennials -- prefer paper books. Heck, according to this survey, e-books only account for a measly nine percent of textbooks bought on campus, while 87 percent were in old-timey print.

Some people have theorized that perhaps it's a money issue, but while textbooks are still a monumental scam, modern students reportedly "prefer print for both pleasure and learning," and it's baffling the shit out of the people in academia who study this sort of phenomenon for a living. Another survey, this one administered by Hewlett-Packard to students at San Jose State, came up with similar findings. They found that when students were offered e-books free of charge, a quarter of them opted to pay cash money for the paper versions instead.

It seems that students tend to skim over things and find it harder to keep track of important sections while studying digitally, while print books make the whole process of earmarking and mentally cataloging information more efficient. A good illustration of this came from one student's response to a survey question which asked what was the worst part about reading a physical book: "It takes me longer because I read more carefully."

Finally, lest we think that today's college students are the last vanguards of paper books, whatever the hell we're calling today's children seem primed to carry the torch. In 2014, almost two-thirds of all schoolchildren said that they'll "always want to read books in print, even though there are e-books available". And that's up from 60 percent in 2012.

#1. Automatic Faucets Are Gross

It's still up in the air as to whether technology will ever succeed in making public bathrooms less disgusting, since human biological functions are inherently chock full o' poo. But by eliminating the need to touch the same handles that Coughy McSalmonella did, automatic faucets were supposed to be inherently safer. And they are, unless you count all the cases of Legionnaires' disease, an infection that causes a 'roided-out form of pneumonia.

It doesn't seem to make much in the way of immediate sense, but electronic faucets have been found to be teeming with infection in hospital environments, and some facilities have begun to put the old versions back in place in order to save lives. As Johns Hopkins infectious disease expert Dr. Lisa Maragakis put it, "Newer is not necessarily better when it comes to infection control in hospitals."

So how is this possible, when we aren't even touching them? As it turns out, newer faucets have a "complicated series of valves" that are required for them to perform their magic, which also makes it very difficult to keep them clean. And because a janitor can't exactly flush the crap out of these faucets every time they're used, they become a breeding ground for all manner of transmittable filth. The moral of the story? Human innovation is basically a curse granted by an enchanted monkey's paw, and technology reached its zenith with the hoop and stick.

03-23-2018, 08:22 AM
Toys R Us founder dies days after chain’s announced shutdown

Associated Press / 07:06 AM March 23, 2018

NEW YORK - Charles P. Lazarus, the World War II veteran who founded Toys R Us six decades ago and transformed it into an iconic piece of Americana, died on Thursday at age 94, a week after the chain announced it was going out of business.

Toys R Us confirmed Lazarus’ death in a statement.

“There have been many sad moments for Toys R Us in recent weeks, and none more heartbreaking than today’s news about the passing of our beloved founder, Charles Lazarus,” the company said. “Our thoughts and prayers are with Charles’ family and loved ones.”

Lazarus, who stepped down as CEO of Toys R Us in 1994, transformed the toy industry with a business model that became one of the first retail category killers — big stores that are so devoted to one thing, and have such an impressive selection, that they drive smaller competitors out of business.

More recently, Toys R Us found itself unable to survive the trends of the digital age, namely competition from the likes of Amazon, discounters like Walmart, and mobile games. No longer able to bear the weight of its heavy debt load, the company announced last week that it would close or sell its 735 stores across the country, including its Babies R Us stores.

But for decades, it was Toys R Us that drove trends in child’s play, becoming a launch pad for what became some of the industry’s hottest toys.

Lazarus, the son of a bicycle store owner, modeled his business after the self-service supermarkets that were becoming popular in the 1950s, stacking merchandise high to give shoppers the feeling it had an infinite number of toys. The stores created a magical feeling for children roaming aisles filled with Barbies, bikes, and other toys arranged in front of them.

The chain has its roots in Children’s Bargain Town, the baby furniture store that Lazarus opened in 1948 in his hometown of Washington, D.C. He began selling toys after a couple of years when customers began asking for them, and he quickly concluded that, in the baby-boom years, toys were a more lucrative business than furniture.

He opened his first store dedicated to selling only toys in 1957, calling it Toys R Us with the “R” spelled backward to give the impression that a child wrote it. Shopping carts stood ready for customers to grab and fill up themselves, supermarket-style.

In 1965, Geoffrey the Giraffe became the company’s mascot, appearing in his first TV commercial in 1973. By the 1980s and early ’90s, Toys R Us dominated the toy-store business, and its jingle, “I’m a Toys R Us kid,” became an anthem for children across the country.

In 1992, Lazarus traveled with President George H.W. Bush for the opening of the first Toys R Us in Japan.

He himself loomed large over his industry at the heyday of his company, personally traveling to the annual Toy Fair in Manhattan. Thousands of buyers from around the world attend but back then, it was Lazarus whom manufacturers were most anxious to impress, said Marc Rosenberg, a veteran toy marketer and founder of SkyBluePink Concepts.

The opportunity to give “Mr. Lazarus” a tour of your showroom was a rite of passage for marketers, said Rosenberg, who first met Lazarus on such an occasion in 1987 as a marketer for Tiger Electronics.

Lazarus walked through the showrooms giving feedback on the playthings arrayed before him, trailed by a group of employees feverishly taking notes on his every word, Rosenberg said.

“As a young marketing guy, if Charles Lazarus liked something you were doing, it was like the greatest thing in the world,” Rosenberg said. “He had a dry sense of humor. If he liked something, he would show it. He would laugh but it wasn’t easy to get him to laugh.”

Rosenberg said Lazarus understood that the success of Toys R Us stemmed from creating a “circus-like atmosphere to keep kids wanting to come back every week.”

Geoffrey the Giraffe soon started a family, with wife Gigi, and a son and daughter. The giraffe family would make regular visits to the stores, parades, and other events.

Rosenberg said the company cut down on such events after Lazarus left and struggled to find a model that could help it compete with the likes of Walmart, which offered a similar selection at lower prices; and later, Amazon.

Lazarus, who was born on October 4, 1923, was inducted into the Toy Industry Association’s Hall of Fame in 1990. /kga

03-23-2018, 08:24 AM
Stephen Hawking: ’His laboratory was the universe’

Associated Press / 09:59 PM March 14, 2018

WASHINGTON — Everyone knew of Stephen Hawking’s cosmic brilliance, but few could comprehend it. Not even top-notch astronomers.

Hawking, who died at his home in Cambridge, England, on Wednesday at age 76, became the public face of science genius. He appeared on “Star Trek: The Next Generation” and “The Big Bang Theory,” voiced himself in “The Simpsons” cartoon series and wrote the best-seller “A Brief History of Time.”

He sold 9 million copies of that book, though many readers didn’t finish it. It’s been called “the least-read best-seller ever.” Hollywood celebrated his life in the 2014 Oscar-winning biopic “The Theory of Everything.”

In some ways, Hawking was the inheritor of Albert Einstein’s mantle of the genius-as-celebrity.

“His contribution is to engage the public in a way that maybe hasn’t happened since Einstein,” said prominent astronomer Wendy Freedman, former director of the Carnegie Observatories. “He’s become an icon for a mind that is beyond ordinary mortals. People don’t exactly understand what he’s saying, but they know he’s brilliant. There’s perhaps a human element of his struggle that makes people stop and pay attention.”

With Einstein, most people are familiar with e=mc2, but they don’t know what it means. With Hawking, his work was too complicated for most people, but they understood that what he was trying to figure out was basic, even primal.

“He was asking and trying to address the very biggest questions we were trying to ask: the birth of the universe, black holes, the direction of time,” said University of Chicago cosmologist Michael Turner. “I think that caught people’s attention.”

And he did so in an impish way, showing humanity despite being in a wheelchair with ALS, the degenerative nerve disorder known in the U.S. as Lou Gehrig’s disease. He flew in a zero-gravity plane. He made public bets with other scientists about the existence of black holes and radiation that emanates from them — losing both bets and buying a subscription to Penthouse for one scientist and a baseball encyclopedia for the other.

“The first thing that catches you is the debilitating disease and his wheelchair,” Turner said. But then his mind and the “joy that he took in science” dominated. And while the public may not have understood what he said, they got his quest for big ideas, Turner said.

Andy Fabian, an astronomer at Hawking’s University of Cambridge and president of the Royal Astronomical Society, said Hawking would start his layman’s lectures on black holes with the joke: “I assume you all have read ’A Brief History of Time’ and understood it.” It always got a big laugh.

“You’d find the average astronomer such as myself doesn’t even try to follow the more esoteric theories that (Hawking) pursued the last 20 years,” Fabian said. “I’ve been to talks Hawking has given and cannot follow them myself.”

Hawking, who was born 300 years to the day after Galileo died, was the Lucasian Professor of Mathematics at Cambridge University. It was the same post that Isaac Newton held. Both physicists and astrophysicists claimed him as their own. And much of Hawking’s work was in the field of cosmology, a deep-thinking branch of astronomy that tries to explain the totality of the universe.

Hawking’s title “is not relevant here; what matters is what his brain did,” said Neil deGrasse Tyson, director of New York’s Hayden Planetarium. “We claim him as an astrophysicist because his laboratory was the universe.”

And Hawking’s black hole work in the mid-1970s made a crucial connection in physics. Until Hawking discovered radiation coming from black holes — named “Hawking radiation” after him — the two giant theories in physics, Einstein’s general relativity and quantum mechanics, often conflicted. Hawking was the first to show they connected, which Turner and others described as a breakthrough at the time.

The concept that stuff, radiation, comes out of black holes may have upset science fiction authors, but it inspired young scientists such as Tyson, who described it as “spooky profound.”

The idea behind this was also novel because it said “black holes aren’t forever,” Turner said.

Hawking also pioneered a “no hair” theory of black holes that they were simple, with just spin, mass and charge and nothing else.

Both of those concepts are cornerstones of current black hole theory.

Hawking’s other work went beyond black holes into the more cosmic, the origins of the universe. Initially he theorized about the “singularity” of the baby universe in thick but elegant mathematical equations comparing early time to wave functions. Later, his own work contradicted some of that, and he was instrumental in theories about inflationary cosmology, where the universe’s beginning is more of a half ball. That theory got its kick-start at a conference Hawking hosted in 1982 with a dinner party and croquet match, Turner said.

The high-concept theory-making didn’t quite match the personality behind it. Colleagues often mention his off-the-wall humor, his big grin, his stubbornness.

And even the public picked up on his cheeky attitude instantly, Turner and Freedman said.

“He added a human face to science,” Turner said. “It goes well beyond the wheelchair.”

The bigger story was how the public became fascinated with this small man, stuck in a wheelchair with a worsening disease, whose intellect few could fathom. They related to the man, Stephen Hawking, and his story, Freedman said.

The insight he gave on the mysteries of the cosmos was just a bonus.

03-28-2018, 07:44 AM
From The Straight Dope ...

An airplane taxies in one direction on a moving conveyor belt going the opposite direction. Can the plane take off?

February 3, 2006

Dear Cecil:

Please, please, please settle this question. The discussion has been going on for ages, and any time someone mentions the words "airplane" or "conveyor belt" everyone starts right back up. Here's the original problem essentially as it was posed to us: "A plane is standing on a runway that can move (some sort of band conveyer). The plane moves in one direction, while the conveyer moves in the opposite direction. This conveyer has a control system that tracks the plane speed and tunes the speed of the conveyer to be exactly the same (but in the opposite direction). Can the plane take off?"

There are some difficulties with the wording of the problem, specifically regarding how we define speed, but the spirit of the situation is clear. The solution is also clear to me (and many others), but a staunch group of unbelievers won't accept it. My conclusion is that the plane does take off. Planes, whether jet or propeller, work by pulling themselves through the air. The rotation of their tires results from this forward movement, and has no bearing on the behavior of a plane during takeoff. I claim the only difference between a regular plane and one on a conveyor belt is that the conveyor belt plane's wheels will spin twice as fast during takeoff. Please, Cecil, show us that it's not only theoretically possible (with frictionless wheels) but it's actually possible too.

Berj A. Doudian, via e-mail

Cecil replies:

Excuse me — did I hear somebody say Monty Hall?

On first encounter this question, which has been showing up all over the Net, seems inane because the answer seems so obvious. However, as with the infamous Monty-Hall-three-doors-and-one-prize-problem (see The Straight Dope: “On Let’s Make a Deal” you pick Door #1, 02-Nov-1990), the obvious answer is wrong, and you, Berj, are right — the plane takes off normally, with no need to specify frictionless wheels or any other such foolishness. You’re also right that the question is often worded badly, leading to confusion, arguments, etc. In short, we’ve got a topic screaming for the Straight Dope.

First the obvious-but-wrong answer. The unwary tend to reason by analogy to a car on a conveyor belt — if the conveyor moves backward at the same rate that the car’s wheels rotate forward, the net result is that the car remains stationary. An aircraft in the same situation, they figure, would stay planted on the ground, since there’d be no air rushing over the wings to give it lift. But of course cars and planes don’t work the same way. A car’s wheels are its means of propulsion — they push the road backwards (relatively speaking), and the car moves forward. In contrast, a plane’s wheels aren’t motorized; their purpose is to reduce friction during takeoff (and add it, by braking, when landing). What gets a plane moving are its propellers or jet turbines, which shove the air backward and thereby impel the plane forward. What the wheels, conveyor belt, etc, are up to is largely irrelevant. Let me repeat: Once the pilot fires up the engines, the plane moves forward at pretty much the usual speed relative to the ground — and more importantly the air — regardless of how fast the conveyor belt is moving backward. This generates lift on the wings, and the plane takes off. All the conveyor belt does is, as you correctly conclude, make the plane’s wheels spin madly.

A thought experiment commonly cited in discussions of this question is to imagine you’re standing on a health-club treadmill in rollerblades while holding a rope attached to the wall in front of you. The treadmill starts; simultaneously you begin to haul in the rope. Although you’ll have to overcome some initial friction tugging you backward, in short order you’ll be able to pull yourself forward easily.

As you point out, one problem here is the wording of the question. Your version straightforwardly states that the conveyor moves backward at the same rate that the plane moves forward. If the plane’s forward speed is 100 miles per hour, the conveyor rolls 100 MPH backward, and the wheels rotate at 200 MPH. Assuming you’ve got Indy-car-quality tires and wheel bearings, no problem. However, some versions put matters this way: “The conveyer belt is designed to exactly match the speed of the wheels at any given time, moving in the opposite direction of rotation.” This language leads to a paradox: If the plane moves forward at 5 MPH, then its wheels will do likewise, and the treadmill will go 5 MPH backward. But if the treadmill is going 5 MPH backward, then the wheels are really turning 10 MPH forward. But if the wheels are going 10 MPH forward . . . Soon the foolish have persuaded themselves that the treadmill must operate at infinite speed. Nonsense. The question thus stated asks the impossible — simply put, that A = A + 5 — and so cannot be framed in this way. Everything clear now? Maybe not. But believe this: The plane takes off.
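Cecil's point that the engines push on the air, not the belt, can be put into a back-of-the-envelope sketch. Every number here is an illustrative assumption of ours (plane mass, thrust, takeoff speed, frictionless wheel bearings), not anything from the column:

```python
# Toy model of the plane on a conveyor belt. Assumptions (ours, for
# illustration only): a 40-tonne plane, 240 kN of constant thrust,
# an 80 m/s takeoff speed, frictionless wheel bearings, and a belt
# that always runs backward at the plane's forward speed.

MASS = 40_000.0      # kg
THRUST = 240_000.0   # N
DT = 0.1             # s per integration step

speed = 0.0          # plane's speed relative to the ground (and the air)
t = 0.0
while speed < 80.0:
    # Thrust acts on the airframe via the air; the belt never appears
    # in the equation of motion, only in the wheel speed below.
    speed += (THRUST / MASS) * DT
    t += DT

belt_speed = speed                        # belt matches, in reverse
wheel_surface_speed = speed + belt_speed  # wheels spin twice as fast

print(f"airborne after {t:.1f} s at {speed:.0f} m/s; "
      f"wheel contact patches moving at {wheel_surface_speed:.0f} m/s")
```

The belt shows up only in the last two lines: it doubles the wheels' rotation rate and changes nothing else, which is exactly Cecil's conclusion.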

03-28-2018, 08:01 AM
On “Let’s Make a Deal,” you pick Door #1. Monty opens Door #2 — no prize. Do you stay with Door #1 or switch to #3?

November 2, 1990

Dear Cecil:

I was perversely flipping through the Parade section of my Sunday newspaper when I stumbled upon Marilyn vos Savant's "Ask Marilyn" column. Even more perversely, I read it. It wasn't a total loss, though, because it appears she made another mistake, even worse than the one you pointed out in a very entertaining column a few months ago. Here's the question:

Suppose you're on a game show and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

ANSWER: Yes; you should switch. The first door has a one-third chance of winning, but the second door has a two-thirds chance. Here's a good way to visualize what happened. Suppose there are a million doors, and you pick door No. 1. Then the host, who knows what's behind the doors and will always avoid the one with the prize, opens them all except door #777,777. You'd switch to that door pretty fast, wouldn't you?

Correct me if I'm wrong, Cecil, but aren't the odds equal for the remaining doors — one in two?

P.S.: If the questions she answers are any indication of the intellect of the general population, this country is in a lot of trouble.

Michael Grice, Madison, Wisconsin

Cecil replies:

This is getting ridiculous. You’re perfectly correct. If there are three doors your chances of picking the right one are one in three. Knock one out of contention and the chances of either of the remaining doors being the right one are equal — one in two. This business about a million doors is a bit of pretzel logic that maybe only somebody with the world’s highest IQ (according to Guinness, anyway) can properly appreciate. Parade’s editors really ought to read the copy before they put it in the magazine.


Dear Cecil:

I certainly won’t be the only one to catch your latest error, but the perverse joy I get pointing it out offsets the small chance this letter has of being printed. I refer to your answer to Michael Grice’s question about the game show conundrum — one prize, three doors, you pick Door #1, the host opens Door #3 to reveal no prize. Should you switch to the remaining door or stick with your original choice? You agreed with Grice that the odds of winning are equal for both — one in two. Wrong! The easiest route to the truth is to notice that resolving never to switch is equivalent to not having the option to switch, in which case, I’m sure you’ll agree, the odds of winning remain one in three. Switching, therefore, has a two-thirds chance at the prize.

Your mistake was not realizing that opening Door #3 tells you more about Door #2 than about the door you originally picked. The reason for this is subtle. The host, in picking Door #3, does not choose from the full set of doors but rather from the subset of doors you did not pick. Each subset’s probability of winning does not change but the probability for a particular door in the second subset does. If you don’t get it find a friend who looks like Monty Hall and play 20 rounds. It will soon become obvious which strategy wins most often.

— Robert E. Johanson, Chicago

Cecil replies:

Hmm. I’ll admit I wasn’t paying much attention when I wrote that column, assumed this was another instance of carelessness on Marilyn vos Savant’s part, and fell into a sucker’s trap. But now that I’ve had a chance to study the matter, it’s apparent there is a subtlety that eluded you as well.

First, though, I feel obliged to eat a bit of crow. The “common sense” answer, the one I gave, is that if you’ve got two doors and one prize, the chances of picking the right door are 50-50. Given certain key assumptions, which we’ll discuss below, this is wrong.

Why? A different example will make it clear. Suppose our task is to pick the ace of spades from a deck of cards. We select one card. The chance we got the right one is 1 in 52. Now the dealer takes the remaining 51 cards, looks at them, and turns over 50, none of which is the ace of spades. One card remains. Should you pick it? Of course. Why? Because (1) the chances were 51 in 52 that the ace was in the dealer’s stack, and (2) the dealer then systematically eliminated all (or most) of the wrong choices. The chances are overwhelming — 51 out of 52, in fact — that the single remaining card is the ace of spades.

Which brings me to the subtlety I mentioned earlier. Your analysis of the game show question is correct, Bobo, only if we make several assumptions: (1) Monty Hall knows which door conceals the prize; (2) he only opens doors that do NOT conceal the prize; and (3) he always opens a door. Assumptions #1 and #2 are reasonable. #3 is not.

Monty Hall is not stupid. He knows, empirically at least, that if he always opens one of the doors without a prize behind it, the odds greatly favor contestants who switch to the remaining door. He also knows the contestants (or at least the highly vocal studio audience) will tumble to this eventually. To make the game more interesting, therefore, a reasonable strategy for him would be to open a door only when the contestant has guessed right in the first place. In that case the contestant would be a fool to change his pick.

But that’s absurd, you say. If Monty only opened a door when you’d chosen correctly in the first place, no one would ever switch. Exactly — so it’s likely Monty adds one last twist. Most of the time he only opens a door when you’ve chosen correctly — but not always. In other words, he tries to bluff the contestants, then counterbluff them.

This strategy changes the odds dramatically. In fact, it can be shown that if Monty always opens a door when the contestant is right and half the time when he’s wrong — a perfectly rational approach — over the long haul the odds of the prize being behind Door #1 versus Door #2 are 50-50.
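That 50-50 figure is easy to verify with a quick simulation; this is a sketch of our own (the round count and door numbering are arbitrary), not anything from the column:

```python
import random

def switch_win_rate(rounds=200_000):
    """Host opens a non-prize door every time the contestant guessed
    right, but only half the time when the guess was wrong (the
    bluff-and-counterbluff strategy). Returns the win rate of an
    always-switching contestant, counting only rounds where a door
    was actually opened."""
    wins = opened_rounds = 0
    for _ in range(rounds):
        prize = random.randrange(3)
        pick = random.randrange(3)
        if pick == prize or random.random() < 0.5:  # host opens a door
            opened = next(d for d in range(3) if d != pick and d != prize)
            new_pick = next(d for d in range(3) if d not in (pick, opened))
            opened_rounds += 1
            wins += (new_pick == prize)
    return wins / opened_rounds

print(f"switching wins {switch_win_rate():.3f} of the time")  # hovers near 0.5
```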

The lesson here is that probability isn’t the cut-and-dried science you might assume from high school math class. Instead it involves a lot of educated guesses about human behavior. I’ll admit I jumped to an unwarranted conclusion on this one. Don’t be too sure you haven’t done the same.

03-28-2018, 08:01 AM
^ Continued ...

The last you’ll ever have to read about this

Dear Cecil:

To beat the dead horse of Monty Hall’s game-show problem: Marilyn was wrong, and you were right the first time …

— Eric Dynamic, Berkeley, California

Dear Cecil:

You really blew it. As any fool can plainly see, when the game-show host opens a door you did not pick and then gives you a chance to change your pick, he is starting a new game. It makes no difference whether you stay or switch, the odds are 50-50.

— Emerson Kamarose, San Jose, California

Dear Cecil:

Suppose our task is to pick the ace of spades from a deck of cards. We select one card. The chance we got the ace of spades is 1 in 52. Now the dealer takes the remaining 51 cards. At this point his odds are 51 in 52. If he turns over 1 card which is not the ace of spades our odds are now 1 in 51, his are 50 in 51. After 50 wrong cards our odds are 1 in 2, his are 1 in 2. The idea that his odds remain 51 in 52 as more and more cards in his hand prove wrong is just plain crazy.

— John Ratnaswamy, Chicago; similarly from Greg, Madison, Wisconsin; Stuart Silverman, Chicago; Frank Mirack, Arlington, Virginia; Dave Franklin, Boston; many others

Cecil replies:

Give it up, gang. It was bad enough that I screwed this up. But you guys have had the benefit of my miraculously lucid explanation of the correct answer! Since you won’t listen to reason, all I can tell you is to play the game and see what happens. One writer says he played his buddy using the faulty logic in my first column and got skunked out of the price of dinner. Several other doubters wrote computer programs that, to their surprise, showed they were wrong and Marilyn vos Savant was right.
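For anyone who'd rather not bet a dinner on it, here's roughly what those doubters' computer programs amount to (a minimal sketch of our own, assuming a host who always opens a non-prize door you didn't pick):

```python
import random

def play(switch, rounds=100_000):
    """Classic Let's Make a Deal: the host always opens a non-prize
    door the contestant didn't pick. Returns the fraction of wins."""
    wins = 0
    for _ in range(rounds):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # A door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == prize)
    return wins / rounds

print(f"stay:   {play(switch=False):.3f}")  # hovers near 1/3
print(f"switch: {play(switch=True):.3f}")   # hovers near 2/3
```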

A friend of mine did suggest another way of thinking about the problem that may help clarify things. Suppose we have the three doors again, one concealing the prize. You pick door #1. Now you’re offered this choice: open door #1, or open door #2 and door #3. In the latter case you keep the prize if it’s behind either door. You’d rather have a two-in-three shot at the prize than one-in-three, wouldn’t you? If you think about it, the original problem offers you basically the same choice. Monty is saying in effect: you can keep your one door or you can have the other two doors, one of which (a non-prize door) I’ll open for you. Still don’t get it? Then at least have the sense to keep quiet about it.

Other correspondents have passed along some interesting variations on the problem. Here’s a couple from Jordan Drachman of Stanford, California: There is a card in a hat. It is either the ace of spades or the king of spades, with equal probability. You take another identical ace of spades and throw it into the hat. You then choose a card at random from the hat. You see it is an ace. What are the odds the original card in the hat was an ace? (Answer: 2/3.) There is a family with two children. You have been told this family has a daughter. What are the odds they also have a son, assuming the biological odds of having a male or female child are equal? (Answer: 2/3.)

Finally, this one from a friend. Suppose we have a lottery with 10,000 “scratch-off-the-dot” tickets. The prize: a car. Ten thousand people buy the tickets, including you. 9,998 scratch off the dots on their tickets and find the message YOU LOSE. Should you offer big money to the remaining ticketholder to exchange tickets with you? (Answer: hey, after all this drill, you figure it out.)

So I lied — This is the last you’ll ever have to read about this

Dear Cecil:

The answers to the logic questions submitted by Jordan Drachman were illogical. In the first problem he says there is an equal chance the card placed in a hat is either an ace of spades or a king of spades. An ace of spades is then added. Now a card is drawn from the hat — an ace of spades. Drachman asks what the odds are that the original card was an ace. Drawing a card does not affect the odds for the original card. They remain 1 in 2 that it was an ace, not 2 in 3 as stated.

In the second problem we are told a couple has two children, one of them a girl. Drachman then asks what the odds are the other child is a boy, assuming the biological odds of having a male or female child are equal. His answer: 2 in 3. How can the gender of one child affect the gender of another? It can’t. The answer is 1 in 2.

— Adam Martin and Anna Davlantes, Evanston, Illinois

Dear Cecil:

In a recent column you asked, “Suppose we have a lottery with 10,000 ‘scratch off the dot’ tickets. The prize: a car. Ten thousand people buy the tickets, including you. 9,998 scratch off the dots on their tickets and find the message ‘YOU LOSE.’ Should you offer big money to the remaining ticketholder to exchange tickets with you?”

If you think the answer is “yes,” you are wrong. If you think the answer is “no,” then you are intentionally misleading your readers …

— Jim Balter, Los Angeles

Cecil replies:

Do you think I could possibly screw this up twice in a row? Of course I could. But not this time. Cecil is well aware the answer to the lottery question is “no” — if there are only two tickets left, they have equal odds of being the winner. The difference between this and the Monty Hall question is that we’re assuming Monty knows where the prize is, and uses that information to select a non-prize door to open; whereas in the lottery example the fact that the first 9,998 tickets are losers is a matter of chance. I put the question at the end of a line of dissimilar questions as a goof — not very sporting, but old habits die hard.
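The difference is easy to demonstrate by simulation. The sketch below uses a scaled-down 100-ticket lottery (so the conditioning event isn’t vanishingly rare to sample) and simply discards any trial in which one of the 98 scratchers would have won:

```python
import random

def lottery(n_tickets=100, trials=200_000):
    """You hold ticket 0; the other holdout has ticket 1. Condition on
    the remaining n-2 scratchers all losing -- which here is pure chance,
    unlike Monty, who knows where the prize is. (Scaled down from 10,000
    to 100 tickets so the conditioning event isn't too rare to sample.)"""
    you_win = accepted = 0
    for _ in range(trials):
        winner = random.randrange(n_tickets)
        if winner in (0, 1):             # everyone else scratched a loser
            accepted += 1
            you_win += (winner == 0)
    return you_win / accepted

print(f"P(you hold the winner) = {lottery():.3f}")   # ~0.5: no point trading
```

Because the losers were revealed by luck rather than by an informed host, the two surviving tickets come out dead even.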

The answers given to Jordan Drachman’s questions — 2 in 3 in both cases — were correct. The odds of the original card being an ace were 1 in 2 before it was placed in the hat. But we are now trying to determine what the original card was in light of subsequent events. Here are the possibilities:

(1) The original card in the hat was an ace. You threw in an ace and then picked the original ace.

(2) The original card in the hat was an ace. You threw in an ace and then picked the second ace.

(3) The original card was a king; you threw in an ace. You then picked the ace.

In 2 of 3 cases, the original card was an ace. QED.
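Skeptics can check the enumeration above by brute force. A short Python sketch:

```python
import random

def ace_in_hat(trials=100_000):
    """Estimate P(original card was an ace | the card drawn is an ace)."""
    orig_ace = drew_ace = 0
    for _ in range(trials):
        original = random.choice(["ace", "king"])   # equal odds
        hat = [original, "ace"]                     # toss in an ace of spades
        if random.choice(hat) == "ace":             # condition on drawing an ace
            drew_ace += 1
            orig_ace += (original == "ace")
    return orig_ace / drew_ace

print(f"{ace_in_hat():.3f}")   # ~0.667
```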

The second question is much the same. The possible gender combinations for two children are:

(1) Child A is female and Child B is male.

(2) Child A is female and Child B is female.

(3) Child A is male and Child B is female.

(4) Child A is male and Child B is male.

We know one child is female, eliminating choice #4. In 2 of the remaining 3 cases, the female child’s sibling is male. QED.

Granted the question is subtle. Consider: we are to be visited by the two kids just described, at least one of whom is a girl. It’s a matter of chance who arrives first. The first child enters — a girl. The second knocks. What are the odds it’s a boy? Answer: 1 in 2. Paradoxical but true. (Thanks to Len Ragozin of New York City.)
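Both answers (2 in 3 given only that the family has a daughter, 1 in 2 given that the first child to arrive is a girl) fall out of a simulation of random families:

```python
import random

def two_children(trials=200_000):
    """Compare the two conditioning rules for the two-children puzzle."""
    has_girl = other_is_boy = 0    # told only: "the family has a daughter"
    first_girl = second_boy = 0    # told: "the first child to arrive is a girl"
    for _ in range(trials):
        kids = [random.choice("GB"), random.choice("GB")]
        if "G" in kids:
            has_girl += 1
            other_is_boy += ("B" in kids)
        # Arrival order is random, so by symmetry we can treat kids[0]
        # as the first arrival.
        if kids[0] == "G":
            first_girl += 1
            second_boy += (kids[1] == "B")
    return other_is_boy / has_girl, second_boy / first_girl

p1, p2 = two_children()
print(f"given a daughter exists:        {p1:.3f}")   # ~0.667
print(f"given first to arrive is girl:  {p2:.3f}")   # ~0.500
```

The difference is entirely in the conditioning: knowing which child is the girl collapses the odds back to 1 in 2.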

Cecil is happy to say he has heard from the originator of the Monty Hall question, Steve Selvin, a UCal-Berkeley prof (cf American Statistician, February 1975). Cecil is happy because he can now track Steve down and have him assassinated, as he richly deserves for all the grief he has caused. Hey, just kidding, doc. But next time you have a brainstorm, do us a favor and keep it to yourself.

03-28-2018, 08:15 AM
How do airplanes fly, really?


July 12, 2005

Dear Straight Dope:

I'm a pilot and I work for an aircraft manufacturing company. Just like every other pilot and many non-pilots, I've learned that aircraft fly because of the low pressure created on the top of an airfoil. However, I'm not sure that's the whole story. I recently read a book called Stick and Rudder by Wolfgang Langewiesche, and he argues that the primary thing that keeps planes in the air is the downward force created by the wing--that the aircraft mostly pushes itself into the sky instead of pulling itself by the top of the wings. I've been thinking (perhaps too much) that he may be right. Perhaps lift is created by both effects. Maybe one is stronger than the other but I think that most people haven't been told the real story about what keeps an airplane in the air. It's got to be more than the low pressure on top of the wing. I want to know the real story.

Bill Rehm

aerodave replies:

You’d think that after a century of powered flight we’d have this lift thing figured out. Unfortunately, it’s not as clear as we’d like. A lot of half-baked theories attempt to explain why airplanes fly. All try to take the mysterious world of aerodynamics and distill it into something comprehensible to the lay audience–not an easy task. Nearly all of the common "theories" are misleading at best, and usually flat-out wrong.

You mention a couple concepts that provide part of the answer but are incomplete in fundamental ways. To hear some tell it, wings work due to angle of attack and little else. Angle of attack (AOA) is the difference between the orientation of a surface and the direction of the air flowing over it. If you hold your hand horizontally out the window of a moving car, and then start to point your fingers toward the sky, you’re giving your hand a positive AOA. That generates a significant amount of upward force on your arm. It makes sense that this same force should keep a plane in the air, and to some degree it does. A wing’s angle of attack helps it deflect air downward, and Newton’s Third Law–every action has an equal and opposite reaction–says this downward push on the air must result in an upward push on the wing. But there’s more to it, since the force generated by this deflection alone is far short of what we see in real life.

The problem is that the AOA explanation ignores the cross-sectional shape of the wing–the quintessential aerodynamic form known as the airfoil. In particular, it suggests that it doesn’t matter how the top of the wing is shaped, and that a thin flat plate would do just fine. Not so. Cambered airfoils–those with the characteristic upward hump that most wings have–can generate lift without any positive AOA at all, or even a slightly negative angle!

Of all the flawed explanations of why planes fly, the one most people have heard involves the Bernoulli principle and "equal transit times." The curved upper surface of the wing is longer than the bottom one, we’re told, and the air above the wing must therefore speed up to keep pace with its counterpart below. Start with this increased velocity on the top of the wing, mix in Bernoulli’s principle, and you find lower pressure above. This pressure differential sucks the airplane upward and voilà! A quarter-million pounds of jumbo jet stays aloft.

This logic is taught to students of all ages, and finds its way into many respectable science textbooks. Its only problem is its complete lack of resemblance to reality. A given "piece" of upper-surface air has no compelling reason to keep pace with its estranged sibling below; it can’t know and doesn’t care what’s happening on the other side of that aluminum partition.

That’s not to say this model totally misses the mark; it just underestimates the effect. The air really does accelerate over the top of the wing, but actually far outpaces the lower-surface air. It’s a good thing, too–if the air stayed in sync over the minuscule difference between the upper and lower paths, a Cessna 152 would have to fly at over 300 mph just to stay in the air! (In reality, it’s about 55 mph.)
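That 300-mph figure is easy to sanity-check with Bernoulli’s principle and some rough numbers. In the sketch below, the 2 percent path-length difference, aircraft weight, and wing area are ballpark assumptions for illustration, not measured airfoil data:

```python
import math

# Sanity-check the "equal transit time" claim. All numbers are rough: a
# Cessna 152 at max gross weight is about 760 kg with ~15 m^2 of wing, and
# the upper surface of a typical airfoil is only a few percent longer than
# the lower one -- 2% is assumed here for illustration.
RHO = 1.225                      # sea-level air density, kg/m^3
mass, wing_area = 760.0, 15.0    # kg, m^2 (approximate Cessna 152 figures)
path_ratio = 1.02                # assumed upper/lower surface length ratio

needed_dp = mass * 9.81 / wing_area   # pressure difference needed to hold the plane up, Pa
# Equal transit times would make v_top = path_ratio * v, so Bernoulli gives
# delta_p = 0.5 * rho * (v_top^2 - v^2) = 0.5 * rho * v^2 * (path_ratio^2 - 1)
v = math.sqrt(needed_dp / (0.5 * RHO * (path_ratio**2 - 1)))
print(f"required speed: {v:.0f} m/s = {v * 2.237:.0f} mph")   # ~300+ mph, vs ~55 mph in reality
```

Under these assumptions the equal-transit-time story demands well over 300 mph of airspeed, which is the point: the real upper-surface flow must be much faster than the geometry alone suggests.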

By this point I’m sure you’re tired of hearing all the non-reasons airplanes fly and would like a real answer. Where does lift come from? One important fact is that fluids have a tendency to adhere to curved surfaces as they flow past. This is called the Coanda effect. The explanation for it is outside the scope of this article, so you’ll have to take my word for it or check out the references. The bottom line is that if you correctly shape the top of the wing, the air will keep hugging the surface as it passes.

The next step is realizing that when a fluid is made to curve, its pressure is "thrown" to the outside by centrifugal force (that’s the simplest way of describing it). As the air crests the hill at the top of the airfoil, it pulls away from the wing surface, creating a slight vacuum next to the metal. The pressure differential helps push the wing up. In addition, the wing causes the flow behind it to move downward. Some of the downward flow is caused by air deflected by the lower surface, and some comes from air that is forced down as it rounds the top. (That’s where the missing lift we mentioned in connection with the AOA explanation comes from!) In short, an airfoil generates lift up by deflecting the air flow down, a clear example of Newton’s Third Law. This illustration makes the notion of downward deflection a little clearer.

Is it wrong to invoke the Bernoulli principle when explaining how an airfoil works? No, but it tends to confuse matters for the non-technical audience. In fact, airfoils can be explained in two ways. I’ve just given the "Newton" explanation–a wing generates lift because it deflects the airflow. The "Bernoulli" explanation has to do with pressure differentials. For complex reasons we needn’t get into (other than to say they have little to do with equal transit times), air moves faster over the top of a wing than beneath it. Bernoulli’s principle tells us that the higher the velocity of an airstream, the lower its static pressure. Because the static pressure below the wing is higher than that above, the wing generates lift. If you don’t quite follow all that, don’t worry–the Newton explanation is perfectly adequate.

That’s the simple explanation of lift. Using that knowledge to design anything useful is a little messier. There’s a big set of equations that in principle can be solved to fully describe the behavior of a moving fluid. They’re called the Navier-Stokes equations, and they’ve got more Greek letters than the Athens phone book. (Check out this page to see what I mean.) When applied to anything approaching reality, they get so hairy they gag the world’s fastest computers. So engineers make lots of simplifying assumptions to produce results that are "good enough." That’s why we have a hard time predicting how real bodies will behave, and why experimentation is so important in aeronautics.

How can aviation be grounded in such a muddy understanding of the underlying physics? As with many other scientific phenomena, it’s not always necessary to understand why something works to make use of it. We engineers are happy if we’ve got enough practical knowledge to build flying aircraft. The rest we chalk up to magic.

Further reading:

Gale M. Craig’s 2002 book Introduction to Aerodynamics does an excellent job of breaking down the misconceptions of flight and explaining the reality with enough math for the techies, but plenty of great description for the non-engineer.

NASA’s Glenn Research Center has an educational site that covers these topics (and lots of others, like propulsion) at a basic level, with interactive Java applets. The incorrect theories of lift start at: http://www.grc.nasa.gov/WWW/K-12/airplane/lift1.html


Langewiesche, W., Stick and Rudder, 1990.

Jane’s All the World’s Aircraft 2004-2005, Jane’s Information Group, 2004.

Send questions to Cecil via cecil@straightdope.com.


03-28-2018, 09:24 AM
Is music universal?

January 19, 2018

Dear Cecil:

Scientifically speaking, is music universal? If some advanced extraterrestrial came to earth, would he recognize our music?

Jim B., via the Straight Dope Message Board

Cecil replies:

Is music universal? Well, that’s a bet being made by the group Messaging Extraterrestrial Intelligence. Not content to simply scan the skies for signs of life, the METI folks want to go ahead and say howdy, which they’re attempting to do via a transmission station in Norway, beaming a binary-coded message at one potentially life-friendly exoplanet 12 light-years away. The content of their message? Melodies, created in collaboration with an artsy Barcelona music festival — lest you thought we’d subject our pen pals to, say, “Escape (The Pina Colada Song).”

For the moment, let’s put the more, uh, universal sense of “universal” on hold and start a little closer to home. Leaving aside all the pop-sci baloney about music being a “universal language” — a phrase specially calibrated to drive both musicologists and linguists nuts — we can say nonetheless that music is universal here on earth: virtually all cultures produce it in some form. We’ve been at it a long time, creating tunes for at least 50,000 years — possibly as long as 250,000, if you count our a cappella period.

This long-term commitment has led some scientists to suggest that the tendency to make music must have had some role to play natural-selection-wise. But what? The list of theories is longer than Wagner’s Ring Cycle: music could have been a “proto-language,” a mode of human communication before formal languages developed; music might facilitate social cohesion, like grooming does for other primates; music might soothe cognitive dissonance in our brains and help us perform complex tasks; and so on.

Not exactly settled science, but let’s accept for the sake of discussion that our taste for music is an evolutionary adaptation. Is there any reason to think our friends from the exoplanet GJ 273b — the target audience for the METI transmission — would’ve evolved similarly? I’ll point you toward an intriguing recent paper in the International Journal of Astrobiology, where a few scientists argue that if advanced extraterrestrials have also undergone a process of natural selection (and there’s no reason to think otherwise), they might resemble us in some fundamental biological ways, and to a degree that might surprise us.

But can they boogie? I regret to inform you this paper didn’t go so far as to venture a guess, Jim. It’s not too hard, though, to make the case it shouldn’t matter either way.

Besides the musical passages, the METI message contains a primer on earth math and physics — from basic arithmetic and geometry up through trig, which gets you to the sine function and ultimately to the wave forms that convey audible sound, as well as a clock function meant to get across the idea of measuring time in seconds. The plan is to provide a conceptual toolkit for budding music lovers on other planets: everything they’d need, ideally, to look at the other data and have a shot at figuring out it’s supposed to represent notes at various pitches over varying lengths of time.

Even if the aliens can’t hear the music in the sense that we understand hearing, they’ll be able to perceive the patterns formed by its constituent data, and, with any luck, grok it anyway. The mathematical orderliness of Bach’s Goldberg Variations, you’d think, might have a beauty that transcends mere audibility. And who knows? Maybe the receiving civilization will have some souped-up hi-fi equipment that transposes our sound waves into a spectrum more locally popular — light, say.

Just the same, they may not see what the big deal is. In a 2001 paper, the musicologist David Huron asks us, in the service of explaining various evolutionary theories of music, to engage in a little thought experiment. Imagine you’re an alien scientist visiting earth, Huron writes. A lot of what you see makes sense to you on a behavioral level: eating, sleeping, etc. But you’d also see us creating music, listening to it, incorporating it into our religious ceremonies and mating rituals — and all this you might find yourself baffled by. “Even if Martian anthropologists had ears, I suspect they would be stumped by music,” Huron concludes.

His point is that music as we understand it may be a uniquely human behavior — graspable by extraterrestrial listeners, sure, but they might find our devotion curious. To be fair, this wouldn’t put them too far behind earth anthropologists; as discussed earlier, we’re still puzzling it out ourselves.

From METI’s perspective, this isn’t a bug but a feature: any alien civilization we contact, the thinking goes, is likely to be much more advanced than we are, so bragging about our scientific achievements is probably pointless — they’re there already, whereas music and other earthly arts might be sui generis enough to pique their interest. To me, though, this argues for sending the very best stuff we’ve got. No offense to the composers in Barcelona, but if we’re trying to turn extraterrestrial heads here, why screw around? Play ’em some Stevie Wonder already.

04-03-2018, 03:25 PM
Why can’t we regenerate limbs like other species?


July 19, 1999

Dear Straight Dope:

Why can't human beings regenerate limbs? What mechanism enables other animals to do so?


SDStaff Doug replies:

It’s the price you pay for your more complex cellular organization. The downside is that if you get an arm cut off, you can’t regrow it. But the bright side is you don’t have to live your life in a mud flat eating plankton.

The issue here isn’t so much what other organisms have that we don’t, but rather what we have LOST that just about everything else has always had. The capacity for regeneration is a matter of cellular recognition during development. A good parallel is the comparison between, say, a pack of wolves and a termite colony. In a pack of wolves, one’s identity and relationship to other pack members are flexible things. You can be the alpha male one day, and kicked out of the pack the next (though a male wolf will not suddenly turn into a female wolf, so there ARE limits). In a termite, once you reach adulthood as a queen/king, you are STUCK.

This same sort of interaction takes place in most organisms at the cellular level. In many organisms, cellular identity has a certain degree of flexibility, and what a given cell turns into depends on what its neighbors are. But there are often restrictions built into this, which tend to tighten as an organism becomes more complex. So, just as a termite MAY be able to switch from worker to queen if the switch occurs before adulthood, but loses that ability later, some organisms lose the capacity to regenerate once they’re past a certain developmental stage. Humans fall into that category. If a newborn baby loses a fingertip, it will regenerate; if a 30-year-old loses a fingertip, that’s that.

What it comes down to is that when an organism is wounded, depending on which type of tissue is involved at the wound site, the repair mechanism may or may not involve cells that still have flexibility in their “programming.” If the cells are still flexible, then they act like they might in an embryo: check what your neighbor cells are, and adjust accordingly. It’s like having a big blueprint that only tells you what goes next to what–you can rebuild a limb or organ that way as long as the cells have some sort of orientation info available (which way is up, where the head/tail is, etc.). Such cells will keep budding off new cells until the plan in the blueprint is completed. If the cells are NOT flexible, as in most adult vertebrates, then when the repair occurs, the damage is filled in with tissue that lacks much identity and doesn’t produce new cells: scar tissue.

To finish the story, some vertebrates can regenerate portions of their anatomy after reaching adulthood, most notably amphibians (salamanders especially; that they should retain such abilities is not surprising when you consider that amphibians undergo metamorphosis, so their cells HAVE TO retain flexibility), and some lizards that can regrow their tails. Not us humans. If we have cells that lose their identity and start to proliferate, we call them “cancer.”

SDStaff Doug, Straight Dope Science Advisory Board

04-03-2018, 03:26 PM
What’s so bad about processed foods?

March 30, 2018

Dear Cecil:

Why are processed foods bad? If I take a chicken breast and process it into a paste, is it worse for my health than if I ate the chicken whole? Please help before I get butt cancer from a chicken nugget!

Jim Huff

Cecil replies:

Let’s get the bad news out of the way, Jim. Last month a big French study came out that tracked the diets of some 100,000 participants to better understand the relationship between cancer and what its authors refer to as “ultra-processed foods,” characterized by “a higher content of total fat, saturated fat, and added sugar and salt, along with a lower fibre and vitamin density.” And though butt cancer wasn’t named as a specific threat, the findings did in fact link a 10 percent proportional increase in consumption of such food with a 10 percent-plus rise in cancer risk overall. Further research is needed, but — brace yourself — it’s looking like chicken nuggets may not be amazing for your health.

The good news? Your skepticism regarding any categorical condemnation of “processed foods” is entirely warranted. This reality, in fact, is a big part of the organizing principle underlying the French research — about which more below. Not only is there nothing inherently bad about processed foods, the phrase itself is so capacious and variously defined as to be basically meaningless. Unless you’re picking grapes off the vine, you’re eating food that’s been processed somehow or other: milk is pasteurized; wheat is ground; salad mix is washed. A 2000 article in the British Medical Bulletin defined food processing as “any procedure undergone by food commodities after they have left the primary producer, and before they reach the consumer” — mere refrigeration counts.

That’s a distinctly capitalist formulation, but by most definitions, humans have engaged in food processing for millennia. Cooking with fire is the ur-processing method, and our ancestors may have been at it 1.5 million years ago. Other techniques have involved creative ways to unlock nutrition in food or extend its shelf life. In the former category, see e.g. nixtamalization, the practice of treating maize with limestone or lye, which helped the Aztecs and Mayans get more protein and disease-preventing vitamins in their diet. For the latter, see basically any form of fermentation: milk turned into cheese and yogurt is the most widespread, but you’ve also got Korean kimchi (fermented vegetables), Nigerian ogi (fermented grains), and assorted smelly preserved-fish dishes encountered at high latitudes, including Swedish surströmming, Norwegian rakfisk, and “stinkheads,” which the Yupik people of Alaska prepare by burying salmon heads in the ground and leaving them to age.

Many methods of fermenting, curing, etc. rely on salt — an extremely consequential ingredient in this realm, particularly prior to refrigeration. But salt took its own place among processed foods early in the 20th century when we began fortifying it with iodine, essential for thyroid function. In coastal regions, iodine in the soil makes its way into the groundwater, but further inland you won’t find enough of it occurring naturally, and till the 1920s there was a huge swath of the northern U.S., stretching from the Appalachians to the Cascades — the “goiter belt,” they called it — where between a quarter and 70 percent of all kids displayed visibly enlarged thyroids. Just as adding vitamin D to milk took care of our national rickets problem, iodized salt pretty much wiped out goiter, plus some serious developmental disorders also associated with iodine deficiency. It’s been floated as one factor behind what’s known as the Flynn effect: the three-point-per-decade rise in IQ observed in developed countries over the 20th century.

So yeah, food processing has done a thing or two for humanity. We haven’t even touched on the fact that urbanization would have been a hell of a lot harder without food preserved for shipping from the hinterlands, or that the zillion hours of food-prep labor we’ve saved by not making everything from scratch would have fallen disproportionately on women’s shoulders.

Do “processed foods,” then, deserve their bad rap? Answer: no, which is something public-health experts are coming around on. The French study discussed above uses a four-category food-classification system proposed by Brazilian researchers in 2010 that separates harmless or beneficial food processing from the problematic sort. NOVA, as the scheme’s called, distinguishes between unprocessed or minimally processed foods (e.g. meats, plants); processed culinary ingredients (sugar, vegetable oils); processed foods (canned vegetables, cured meats); and, finally, ultra-processed foods, defined as “industrial formulations with five or more and usually many ingredients,” including “substances not commonly used in culinary preparations.”

And yes, the NOVA authors list “poultry and fish ‘nuggets’” in this last group. Again, one thing typifying ultra-processed foods is low nutrient density and high energy density, what with all that added fat and carbs. Those traits are much of (a) the real problem, as far as healthy eating goes, and (b) why Doritos taste so good. But I hope by now you’re distinguishing this kind of stuff from your homemade chicken paste, which isn’t the kind of processed food you need to worry about. Appealing as it sounds, though, I’m afraid I’ve got dinner plans.

Cecil Adams

04-03-2018, 03:27 PM
What’s the life span of a skyscraper?

February 9, 2018

Dear Cecil:

An engineering professor used to tell our class, "Everything eventually fails. The question is, when?" So what happens with a mammoth building? What's the intended life span of, say, the Willis Tower? The way cities are packed, I can't imagine a demolition crew can just drop a hundred-story building without causing chaos.

Concerned Citizen in Chicago

Cecil replies:

I’ll concede that “Everything eventually fails” is more than useful enough as credos go. But there’s a power at work here that can give even the ravages of time a fair fight: the profit motive. Last winter, the owners of the Willis Tower announced a $500 million plan to modernize the 45-year-old building, and the city started issuing permits over the summer. Clearly somebody doesn’t think that thing’s coming down anytime soon.

Renovating and retrofitting is increasingly the name of the game when it comes to giant buildings, the current thinking being that it’s financially (not to mention ecologically) smarter to refresh them periodically than to tear down and start anew. Most experts figure there’s no reason an appropriately upkept skyscraper can’t stay there pretty much indefinitely.

Without upkeep? As we discussed in a 2016 column, towers in a low-lying city like New York, for instance, would have only about 50 years of life in them absent human intervention: water erodes foundations if no one’s around to continually pump it out. In general, if you want to keep a structure standing, you’re going to need a sound water-management plan; foundation aside, one requirement for skyscraper longevity is that their steel-reinforced concrete bones don’t get exposed to rain, the acid in which eats at the limestone content.

But assuming they get proper TLC, the dinosaurs of the Manhattan skyline will likely hang around a while yet. Sure, we can expect to lose a few old office towers here and there in the coming decades, mainly structures that were built when energy was cheap and nowadays cost a fortune to heat and cool — all that single-pane glass, etc. But by and large in New York, market forces and building code collude to keep old buildings standing. Many skyscrapers went up in an era of fewer regulatory constraints, so developers thinking about a tear-down today may be looking at a necessarily smaller building in its place — and thus less rent revenue going forward.

Now, Tokyo is another story. Owing to the local mix of property values, zoning laws, and design standards, it often makes more sense there to knock down and rebuild. As the New York Times put it a couple of years ago, there’s a bull market for demolition in Japan, and the nation is becoming a world leader in the fine art of removing skyscrapers.

The fine art? I’m guessing when you’re picturing the “chaos” of skyscraper demolition, CCC, you’re investing the scene with a lot of unwarranted drama — dynamite, countdown, plunger-style detonator, great clouds of dust and debris. In fact, the implosion method is now used for only about 2 percent of demolitions; most buildings, particularly in dense urban areas, are taken apart more laboriously, using cranes and elbow grease.

Those in the deconstruction business are constantly coming up with new techniques, though, and right now is a particularly exciting time to be a building wrecker. In South Africa, they’re using high-pressure gas canisters in place of dynamite to break up concrete — quieter, less violent, and a lot less permitting required. In France, remote-controlled hydraulic devices push over supporting walls on midlevel floors, causing the top of the building to cave in on itself and pancake the rest on its way down.

But the Japanese are out ahead of the pack. They’ve got lots of buildings to practice on, many just too tall and too tightly surrounded to raze the old-fashioned way, and since 2002 they’ve been bound by a stringent law mandating the reuse of building materials, encouraging them to make as little mess as possible. As such, Japanese companies have worked up startlingly gentle demolition techniques — you can find trippy time-lapse videos where structures appear to just slowly sink into the ground. In one method, the roof is held up by jacks as workers remove the top floor in its entirety; the roof is then lowered, and the process continues until the building’s disappeared. Employing a similar concept, another version starts from the bottom: workers jack up everything above the ground floor, then take that out; rinse, repeat.

Fancy, and not just on the safety front: these demolition firms boast that their recycling of materials and relatively modest use of heavy machinery leads to impressive reductions in carbon emissions — up to 85 percent in some cases. Plus, the whole deal looks about as quiet and smooth-running as the Tokyo subway. There’s no indication such techniques will make it to New York or Chicago in the near future, but neither will the incentives to demolish in the first place. Nor will notably quiet and smooth-running subways, for that matter. In more ways than one, we’re stuck with the cities we’ve got.

Cecil Adams

04-05-2018, 10:48 AM
The center of the Milky Way is teeming with black holes

Associated Press / 09:38 AM April 05, 2018

WASHINGTON - The center of our galaxy is teeming with black holes, sort of like a Times Square for strange super gravity objects, astronomers discovered.

For decades, scientists theorized that circling in the center of galaxies, including ours, were lots of stellar black holes, collapsed giant stars where the gravity is so strong even light doesn’t get out. But they hadn’t seen evidence of them in the Milky Way core until now.

Astronomers poring over old X-ray observations have found signs of a dozen black holes in the inner circle of the Milky Way. And since most black holes can’t even be spotted that way, they calculate that there are likely thousands of them there. They estimate it could be about 10,000, maybe more, according to a study published Wednesday in the journal Nature.

“There’s lots of action going on there,” said study lead author Chuck Hailey, a Columbia University astrophysicist. “The galactic center is a strange place. That’s why people like to study it.”

The stellar black holes are in addition to — and essentially circling — the already known supermassive black hole, called Sagittarius A*, that’s parked at the center of the Milky Way.

In the rest of the massive Milky Way, scientists have only spotted about five dozen black holes so far, Hailey said.

The newly discovered black holes are within about 19.2 trillion miles (30.9 trillion kilometers) of the supermassive black hole at the center. So there’s still a lot of empty space and gas amid all those black holes. But if you took the equivalent space around Earth there would be zero black holes, not thousands, Hailey said.

Earth sits in a spiral arm about 26,000 light-years from the center of the galaxy. (A light-year is 5.9 trillion miles, or 9.5 trillion kilometers.)

Harvard astronomer Avi Loeb, who wasn’t part of the study, praised the finding as exciting but confirming what scientists had long expected.

The newly confirmed black holes are about 10 times the mass of our sun, as opposed to the central supermassive black hole, which has the mass of 4 million suns. Also, the ones spotted are only of the binary type, where a black hole has partnered with a star and together they emit large amounts of X-rays as the star's outer layers are sucked into the black hole. Those X-rays are what astronomers observe.

By looking at closer binary black hole systems, astronomers can measure the ratio between what's visible and what's too faint to be observed from far away. Using that ratio, Hailey figures that even though they spotted only a dozen, there must be 300 to 500 binary black hole systems.

But binary black hole systems are likely only 5 percent of all black holes, so that means there are really thousands of them, Hailey said.
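Hailey's scaling argument in the preceding paragraphs amounts to two simple corrections: scale up the observed binaries by the fraction that are bright enough to detect, then scale up again by the fraction of all black holes that live in binaries. Here is a back-of-the-envelope sketch; the function name and the specific detectable fraction are illustrative assumptions chosen to reproduce the figures quoted in the article, not values taken from the study itself:

```python
# Illustrative version of the population extrapolation described above.
# The 3% detectable fraction is an assumption picked so that a dozen
# observed binaries scales to roughly the 300-500 range in the article.

def estimate_total_black_holes(observed_binaries,
                               detectable_fraction,
                               binary_fraction):
    """Scale observed X-ray binaries up to a total black hole count.

    observed_binaries   -- binaries actually seen in the X-ray data
    detectable_fraction -- share of binaries bright enough to detect,
                           inferred from closer, well-studied systems
    binary_fraction     -- share of all black holes that are in binaries
    """
    total_binaries = observed_binaries / detectable_fraction
    return total_binaries / binary_fraction

# A dozen observed at ~3% detectability implies ~400 binaries, which at
# a 5% binary fraction implies on the order of 8,000 black holes in all,
# consistent with the "about 10,000, maybe more" estimate.
total = estimate_total_black_holes(12, 0.03, 0.05)
print(round(total))
```

Note how sensitive the answer is to the assumed fractions: halving the binary fraction doubles the total, which is why the study quotes a broad range rather than a single number.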

There are good reasons the Milky Way’s black holes tend to be in the center of the galaxy, Hailey said.

First, their mass tends to pull them to the center. But mostly the center of the galaxy is the perfect “hot house” for black hole formation, with lots of dust and gas.

Hailey said it is “sort of like a little farm where you have all the right conditions to produce and hold on to a large number of black holes.”

Sam Miguel
07-27-2018, 09:39 AM
From the New York Times online - - -

The Ignorant Do Not Have a Right to an Audience

By Bryan W. Van Norden

Mr. Van Norden is a professor of philosophy.

June 25, 2018

On June 17, the political commentator Ann Coulter, appearing as a guest on Fox News, asserted that crying migrant children separated from their parents are “child actors.” Does this groundless claim deserve as much airtime as, for example, a historically informed argument from Ta-Nehisi Coates that structural racism makes the American dream possible?

Jordan Peterson, a professor of psychology at the University of Toronto, has complained that men can’t “control crazy women” because men “have absolutely no respect” for someone they cannot physically fight. Does this adolescent opinion deserve as much of an audience as the nuanced thoughts of Kate Manne, a professor of philosophy at Cornell University, about the role of “himpathy” in supporting misogyny?

We may feel certain that Coulter and Peterson are wrong, but some people feel the same way about Coates and Manne. And everyone once felt certain that the Earth was the center of the solar system. Even if Coulter and Peterson are wrong, won’t we have a deeper understanding of why racism and sexism are mistaken if we have to think for ourselves about their claims? And “who’s to say” that there isn’t some small fragment of truth in what they say?

If this specious line of thought seems at all plausible to you, it is because of the influence of “On Liberty,” published in 1859 by the English philosopher John Stuart Mill. Mill’s argument for near-absolute freedom of speech is seductively simple. Any given opinion that someone expresses is either wholly true, partly true or false.

To claim that an unpopular or offensive opinion cannot be true “is to assume our own infallibility.” And if an offensive opinion is true, to limit its expression is clearly bad for society. If an opinion is partly true, we should listen to it, because “it is only by the collision of adverse opinions, that the remainder of the truth has any chance of being supplied.” And even if an opinion is false, society will benefit by examining the reasons it is false. Unless a true view is challenged, we will hold it merely “in the manner of a prejudice, with little comprehension or feeling of its rational grounds.”

The problem with Mill’s argument is that he takes for granted a naïve conception of rationality that he inherited from Enlightenment thinkers like René Descartes. For such philosophers, there is one ahistorical rational method for discovering truth, and humans (properly educated) are approximately equal in their capacity for appreciating these truths. We know that “of all things, good sense is the most fairly distributed,” Descartes assures us, because “even those who are the hardest to satisfy in every other respect never desire more of it than they already have.”

Of course, Mill and Descartes disagreed fundamentally about what the one ahistorical rational method is — which is one of the reasons for doubting the Enlightenment dogma that there is such a method.

If you do have faith in a universal method of reasoning that everyone accepts, then the Millian defense of absolute free speech is sound. What harm is there in people hearing obvious falsehoods and specious argumentation if any sane and minimally educated person can see through them? The problem, though, is that humans are not rational in the way Mill assumes. I wish it were self-evident to everyone that we should not discriminate against people based on their sexual orientation, but the current vice president of the United States does not agree. I wish everyone knew that it is irrational to deny the evidence that there was a mass shooting in Sandy Hook, but a syndicated radio talk show host can make a career out of arguing for the contrary.

Historically, Millian arguments have had some good practical effects. Mill followed Alexis de Tocqueville in identifying “the tyranny of the majority” as an ever-present danger in democracies. As an advocate of women’s rights and an opponent of slavery, Mill knew that many people then regarded even the discussion of these issues as offensive. He hoped that by making freedom of speech a near absolute right he could guarantee a hearing for opinions that were true but unpopular among most of his contemporaries.

However, our situation is very different from that of Mill. We are seeing the worsening of a trend that the 20th century German-American philosopher Herbert Marcuse warned of back in 1965: “In endlessly dragging debates over the media, the stupid opinion is treated with the same respect as the intelligent one, the misinformed may talk as long as the informed, and propaganda rides along with education, truth with falsehood.” This form of “free speech,” ironically, supports the tyranny of the majority.

The media are motivated primarily by getting the largest audience possible. This leads to a skewed conception about which controversial perspectives deserve airtime, and what “both sides” of an issue are. How often do you see controversial but well-informed intellectuals like Noam Chomsky and Martha Nussbaum on television? Meanwhile, the former child-star Kirk Cameron appears on television to explain that we should not believe in evolutionary theory unless biologists can produce a “crocoduck” as evidence. No wonder we are experiencing what Marcuse described as “the systematic moronization of children and adults alike by publicity and propaganda.”

Marcuse was insightful in diagnosing the problems, but part of the solution he advocated was suppressing right-wing perspectives. I believe that this is immoral (in part because it would be impossible to do without the exercise of terror) and impractical (given that the internet was actually invented to provide an unblockable information network). Instead, I suggest that we could take a big step forward by distinguishing free speech from just access. Access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole.

There is a clear line between censoring someone and refusing to provide them with institutional resources for disseminating their ideas. When Nathaniel Abraham was fired in 2004 from his position at the Woods Hole Oceanographic Institution because he admitted to his employer that he did not believe in evolution, it was not a case of censorship of an unpopular opinion. Abraham thinks that he knows better than other scientists (and better than other Christians, like Pope Francis, who reminded the faithful that God is not “a magician, with a magic wand”). Abraham has every right to express his ignorant opinion to any audience that is credulous enough to listen. However, Abraham does not have a right to a share of the intellectual capital that comes from being associated with a prestigious scientific institution like Woods Hole.

Similarly, the top colleges and universities that invite Charles Murray to share his junk science defenses of innate racial differences in intelligence (including Columbia and New York University) are not promoting fair and balanced discourse. For these prestigious institutions to deny Murray an audience would be for them to exercise their fiduciary responsibility as the gatekeepers of rational discourse. We have actually seen a good illustration of what I mean by “just access” in ABC’s courageous decision to cancel “Roseanne,” its highest-rated show. Starring on a television show is a privilege, not a right. Roseanne compared a black person to an ape. Allowing a show named after her to remain on the air would not be impartiality; it would be tacitly endorsing the racist fantasy that her views are part of reasonable mainstream debate.

Sam Miguel
07-27-2018, 09:40 AM
^^^ (Continued)

Donald Trump, first as candidate and now as president, is such a significant news story that responsible journalists must report on him. But this does not mean that he should be allowed to set the terms of the debate. Research shows that repeatedly hearing assertions increases the likelihood of belief — even when the assertions are explicitly identified as false. Consequently, when journalists repeat Trump’s repeated lies, they are actually increasing the probability that people will believe them.

Even when journalistic responsibility requires reporting Trump’s views, this does not entail giving all of his spokespeople an audience. MSNBC’s “Morning Joe” set a good precedent for just access by banning Kellyanne Conway from the show for casually spouting “alternative facts.”

Marcuse also suggested, ominously, that we should not “renounce a priori violence against violence.” Like most Americans, I spontaneously cheered when I saw the white nationalist Richard Spencer punched in the face during an interview. However, as I have noted elsewhere, Mahatma Gandhi and the Rev. Martin Luther King, Jr. showed us that nonviolent protest is not only a moral demand (although it is that too); it is the highest strategic cunning. Violence plays into the hands of our opponents, who relish the opportunity to play at being martyrs. Consequently, while it was wrong for Middlebury College to invite Murray to speak, it was even more wrong for students to assault Murray and a professor escorting him across campus. (Ironically, the professor who was injured in this incident is a critic of Murray who gave a Millian defense of allowing him to speak on campus.)

What just access means in terms of positive policy is that institutions that are the gatekeepers to the public have a fiduciary responsibility to award access based on the merit of ideas and thinkers. To award space in a campus lecture hall to someone like Peterson who says that feminists “have an unconscious wish for brutal male domination,” or to give time on a television news show to someone like Coulter who asserts that in an ideal world all Americans would convert to Christianity, or to interview a D-list actor like Jenny McCarthy about her view that actual scientists are wrong about the public health benefits of vaccines is not to display admirable intellectual open-mindedness. It is to take a positive stand that these views are within the realm of defensible rational discourse, and that these people are worth taking seriously as thinkers.

Neither is true: These views are specious, and those who espouse them are, at best, ignorant, at worst, sophists. The invincibly ignorant and the intellectual huckster have every right to express their opinions, but their right to free speech is not the right to an audience.

Bryan W. Van Norden is a professor of philosophy at Wuhan University, Yale-NUS College and Vassar College. He is the author, most recently, of “Taking Back Philosophy: A Multicultural Manifesto.”

Sam Miguel
07-27-2018, 09:53 AM
From the New York Times online - - -

What Religion Gives Us (That Science Can’t)

By Stephen T. Asma

Mr. Asma is a professor of philosophy.

June 3, 2018

It’s a tough time to defend religion. Respect for it has diminished in almost every corner of modern life — not just among atheists and intellectuals, but among the wider public, too. And the next generation of young people looks likely to be the most religiously unaffiliated demographic in recent memory.

There are good reasons for this discontent: continued revelations of abuse by priests and clerics, jihad campaigns against “infidels” and homegrown Christian hostility toward diversity and secular culture. This convergence of bad behavior and bad press has led many to echo the evolutionary biologist E. O. Wilson’s claim that “for the sake of human progress, the best thing we could possibly do would be to diminish, to the point of eliminating, religious faiths.”

Despite the very real problems with religion — and my own historical skepticism toward it — I don’t subscribe to that view. I would like to argue here, in fact, that we still need religion. Perhaps a story is a good way to begin.

One day, after pompously lecturing a class of undergraduates about the incoherence of monotheism, I was approached by a shy student. He nervously stuttered through a heartbreaking story, one that slowly unraveled my own convictions and assumptions about religion.

Five years ago, he explained, his older teenage brother had been brutally stabbed to death, viciously attacked and mutilated by a perpetrator who was never caught. My student, his mother and his sister were shattered. His mother suffered a mental breakdown soon afterward and would have been institutionalized if not for the fact that she expected to see her slain son again, to be reunited with him in the afterlife where she was certain his body would be made whole. These bolstering beliefs, along with the church rituals she engaged in after her son’s murder, dragged her back from the brink of debilitating sorrow, and gave her the strength to continue raising her other two children — my student and his sister.

To the typical atheist, all this looks irrational, and therefore unacceptable. Beliefs, we are told, must be aligned with evidence, not mere yearning. Without rational standards, like those entrenched in science, we will all slouch toward chaos and end up in pre-Enlightenment darkness.

I do not intend to try to rescue religion as reasonable. It isn’t terribly reasonable. But I do want to argue that its irrationality does not render it unacceptable, valueless or cowardly. Its irrationality may even be the source of its power.

The human brain is a kludge of different operating systems: the ancient reptilian brain (motor functions, fight-or-flight instincts), the limbic or mammalian brain (emotions) and the more recently evolved neocortex (rationality). Religion irritates the rational brain because it trades in magical thinking and no proof, but it nourishes the emotional brain because it calms fears, answers to yearnings and strengthens feelings of loyalty.

According to prominent neuroscientists like Jaak Panksepp, Antonio Damasio and Kent Berridge, as well as neuropsychoanalysts like Mark Solms, our minds are motivated primarily by ancient emotional systems, like fear, rage, lust, love and grief. These forces are adaptive and help us survive if they are managed properly — that is, if they are made strong enough to accomplish goals of survival, but not so strong as to overpower us and lead to neuroses and maladaptive behavior.

My claim is that religion can provide direct access to this emotional life in ways that science does not. Yes, science can give us emotional feelings of wonder at the majesty of nature, but there are many forms of human suffering that are beyond the reach of any scientific alleviation. Different emotional stresses require different kinds of rescue. Unlike previous secular tributes to religion that praise its ethical and civilizing function, I think we need religion because it is a road-tested form of emotional management.

Of course, there is a well-documented dark side to spiritual emotions. Religious emotional life tilts toward the melodramatic. Religion still trades readily in good-and-evil narratives, and it gives purchase to testosterone-fueled revenge fantasies and aggression. While this sort of zealotry is undeniably dangerous, most religion is actually helpful to the average family struggling to eke out a living in trying times.

Religious rituals, for example, surround the bereaved person with our most important resource — other people. Even more than other mammals, humans are extremely dependent on others — not just for acquiring resources and skills, but for feeling well. And feeling well is more important than thinking well for my survival.

Religious practice is a form of social interaction that can improve psychological health. When you’ve lost a loved one, religion provides a therapeutic framework of rituals and beliefs that produce the oxytocin, internal opioids, dopamine and other positive affects that can help with coping and surviving. Beliefs play a role, but they are not the primary mechanisms for delivering such therapeutic power. Instead, religious practice (rituals, devotional activities, songs, prayer and story) manages our emotions, giving us opportunities to express care for each other in grief, providing us with the alleviation of stress and anxiety, or giving us direction and an outlet for rage.

Atheists like Richard Dawkins, E. O. Wilson and Sam Harris are evaluating religion at the neocortical level — their criterion for assessing it is the rational scientific method. I agree with them that religion fails miserably at the bar of rational validity, but we’re at the wrong bar. The older reptilian brain, built by natural selection for solving survival challenges, was not built for rationality. Emotions like fear, love, rage — even hope or anticipation — were selected for because they helped early mammals flourish. In many cases, emotions offer quicker ways to solve problems than deliberative cognition.

For us humans, the interesting issue is how the old animal operating system interacts with the new operating system of cognition. How do our feelings and our thoughts blend together to compose our mental lives and our behaviors? The neuroscientist Antonio Damasio has shown that emotions saturate even the seemingly pure information-processing aspects of rational deliberation. So something complicated is happening when my student’s mother remembers and projects her deceased son, and embeds him in a religious narrative that helps her soldier on.

No amount of scientific explanation or sociopolitical theorizing is going to console the mother of the stabbed boy. Bill Nye the Science Guy and Neil deGrasse Tyson will not be much help, should they decide to drop over and explain the physiology of suffering and the sociology of crime. But the magical thinking that she is going to see her murdered son again, along with the hugs from and songs with fellow parishioners, can sustain her. If this emotionally grounded hope gives her the energy and vitality to continue caring for her other children, it can do the same for others. And we can see why religion persists.

Those of us in the secular world who critique such emotional responses and strategies with the refrain, “But is it true?” are missing the point. Most religious beliefs are not true. But here’s the crux. The emotional brain doesn’t care. It doesn’t operate on the grounds of true and false. Emotions are not true or false. Even a terrible fear inside a dream is still a terrible fear. This means that the criteria for measuring a healthy theory are not the criteria for measuring a healthy emotion. Unlike a healthy theory, which must correspond with empirical facts, a healthy emotion is one that contributes to neurochemical homeostasis or other affective states that promote biological flourishing.

Sam Miguel
07-27-2018, 09:54 AM
^^^ (Continued)

Finally, we need a word or two about opiates. The modern condemnation of religion has followed the Marxian rebuke that religion is an opiate administered indirectly by state power in order to secure a docile populace — one that accepts poverty and political powerlessness, in hopes of posthumous supernatural rewards. “Religion is the sigh of the oppressed creature,” Marx claimed, “the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”

Marx, Mao and even Malcolm X leveled this critique against traditional religion, and the critique lives on as a disdainful last insult to be hurled at the believer. I hurled it myself many times, thinking that it was a decisive weapon. In recent years, however, I’ve changed my mind about this criticism.

First, religion is energizing as often as it is anesthetizing. As often as it numbs or sedates, religion also riles up and invigorates the believer. This animating quality of religion can make it more hazardous to the state than it is tranquilizing, and it also inspires a lot of altruistic philanthropy.

Second, what’s so bad about pain relief anyway? If my view of religion is primarily therapeutic, I can hardly despair when some of that therapy takes the form of palliative pain management. If atheists think it’s enough to dismiss the believer on the grounds that he should never buffer the pains of life, then I’ll assume the atheist has no recourse to any pain management in his own life. In which case, I envy his remarkably good fortune.

For the rest of us, there is aspirin, alcohol, religion, hobbies, work, love, friendship. After all, opioids — like endorphins — are innate chemical ingredients in the human brain and body, and they evolved, in part, to occasionally relieve the organism from misery. To quote the well-known phrase by the German humorist Wilhelm Busch, “He who has cares has brandy, too.”

We need a more clear-eyed appreciation of the role of cultural analgesics. It is not enough to dismiss religion on the grounds of some puritanical moral judgment about the weakness of the devotee. Religion is the most powerful cultural response to the universal emotional life that connects us all.

Stephen Asma is a professor of philosophy at Columbia College Chicago, and the author of the forthcoming “Why We Need Religion.”

Sam Miguel
07-27-2018, 10:04 AM
From the New York Times online - - -

Rebooting the Ethical Soldier

By Robert H. Latiff

Mr. Latiff is a retired Air Force major general.

July 16, 2018

In 2014, the United States Army Research Laboratory published a report predicting what the battlefield of 2050 would look like. Not surprisingly, it was a scenario largely driven by technology, and the report described a sort of warfare most people associate with video games or science-fiction movies — combined forces of augmented or enhanced humans, robots operating in swarms, laser weapons, intelligence systems and cyberbots fighting in a highly contested information environment using spoofing, hacking, misinformation and other means of electronic warfare.

In one sense, this is nothing new. The way wars are fought has always changed with technology. But humans themselves don’t change so rapidly. As a retired Air Force major general with special interests in both technology and military ethics, I have a specific concern: that as new weapons technologies make soldiering more lethal, our soldiers will find it more difficult than ever to behave ethically and to abide by the long-established conventions regarding the rules of war.

As the nation with the most powerful military on earth, we are obligated to acknowledge these profound fundamental changes in the experience of soldiering, to confront them and to prepare to adapt.

On battlefields of the past, killing was intentional and intensely personal. In the future, the automated nature of combat, the artificial enhancement of soldiers and the speed and distances involved will threaten to undermine the warrior ethos. This applies not only to conduct between opposing soldiers, but among those fighting on the same side as well. In the past, soldiers took risks for one another. It is not at all clear how the introduction of autonomous machines will change that dynamic. Would a soldier be willing to entrust his life to a machine? Would a fellow soldier put his life or the lives of other soldiers in danger to save an important robot? Should he?

Many soldiers will no longer have to fear close contact and hand-to-hand combat because they will be able to deploy robots and unmanned vehicles at great ranges. However, fear acts as a modulator of behavior, and by reducing it we will likely also remove constraints on unethical behavior. If a soldier cannot see, hear and understand the context of a battlefield or a particular engagement, he is less likely to concern himself with decisions requiring such nuance.

Perhaps a machine can make a utilitarian calculation on proportionality of force, but can it make an empathetic decision? Would it be able to sense an enemy’s wavering determination, for instance, and call off an attack to prevent unnecessary loss of life? Commanders and troops on the ground, in contact with an enemy, have a “feel” for the complexities of the battlefield that cannot be reproduced by a machine.

Computers, artificial intelligence, robots and autonomous systems will create an environment too complex and fast for humans to keep up with, much less control.

Increasingly, warfare will exceed the capacity of the human senses to collect and process data. Department of Defense policy on autonomy in weapons systems says a human will always be part of the decision-making for lethal weapons, but the Pentagon nonetheless spent $2.5 billion in 2017 on artificial intelligence, and every military service is funding work in autonomous systems. In 2019, the Defense Department expects to spend $9.4 billion on unmanned systems, with over $800 million for autonomy, human-machine teaming and swarm research.

Gradually, perhaps imperceptibly, automated systems will function so much more efficiently that humans will become mere bystanders. The soldier will become the slowest element in an engagement, or will simply become irrelevant. Adherence to the rules of war will become less relevant as well.

A separate set of ethical questions is raised by the technologies of human “enhancement” and augmentation, which include improving physical strength, stamina and pain tolerance, as well as using neurological implants and stimulation to restore brain function and enhance learning.

Can soldiers under the influence of behavior-modifying drugs or electronics be held to account for their actions? If the soldier is using drugs to enhance his cognition or reduce his fear, what is the role of free will? Might a soldier who fears nothing unnecessarily place himself, his unit or innocent bystanders at risk? What about the impact of memory-altering drugs on the soldier’s sense of guilt, which might be important in decisions about unnecessary and superfluous suffering?

These are important decisions in war, and they form the basis for many of the tenets of “just war” theory. Gen. Paul Selva, the vice chairman of the Joint Chiefs of Staff, supports “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”

The work of revising and recasting these conventions should be taking place at the highest levels of government. So far, it hasn’t. The White House’s Select Committee on Artificial Intelligence, formed in May, has not even acknowledged the major ethical issues surrounding A.I. that have been very publicly raised by an increasing number of scientists and technology experts like Elon Musk, Bill Gates and Stephen Hawking.

While it is important that leaders openly recognize the critical nature of these issues, the Department of Defense needs to follow up on its 2012 directive on autonomy with guidelines for researchers and commanders. It should require that both researchers and military commanders question — throughout the development process and long before the systems are ready for deployment — how the systems will be used and whether that use might violate any of the laws of armed conflict or international humanitarian law.

Historically, the United States has led the world in technology development, and thus our use of questionable weapons or methods will be well noted by others. Sadly, in the period after the Sept. 11 attacks, the United States resorted to torture of enemy detainees. While most senior leaders have denounced the practice, the fact remains that the nation crossed an important moral threshold. Knowing that, future enemies — even civilized ones — may be less inhibited in employing the same methods against us. The same will be true for advanced technologies.

With new warfare technologies, we now have an opportunity to again demonstrate our leadership in human rights by ensuring that our young soldiers know how to use the new weapons in ethical and humane ways.

Robert H. Latiff is a retired Air Force major general and the author of “Future War: Preparing for the New Global Battlefield.”