
Showing posts with label Philosophy. Show all posts

One in five teenagers will experiment with philosophy

Dec 3, 2012

Doubting is a gateway to thinking. Stop it before it starts.
"I found copies of Kant in your room. I'm concerned."
Parents who use logic will raise kids who use logic.
"You've been doing thought experiments, haven't you?"
Learn to recognize the early warning signs.

40 Belief-Shaking Remarks

Oct 25, 2012

If there’s one thing Friedrich Nietzsche did well, it’s obliterate feel-good beliefs people have about themselves. He has been criticized for being a misanthrope, a subversive, a cynic and a pessimist, but I think these assessments are off the mark. I believe he only wanted human beings to be more honest with themselves.

He did have a remarkable gift for aphorism — he once declared, “It is my ambition to say in ten sentences what others say in a whole book.” A hundred years after his death, Nietzsche retains his disturbing talent for turning a person’s worldview upside-down with one jarring remark.

Even today his words remain controversial. They hit nerves. Most of his views are completely at odds with the status quo.

Here are 40 unsympathetic statements from the man himself. Many you’ll agree with. Others you will resist, but these are the ones to pay the most attention to — your beliefs are being challenged. It’s either an opportunity to grow, or to insist that you already know better. If any of them hit a nerve in you, ask yourself why.

***

1. People who have given us their complete confidence believe that they have a right to ours. The inference is false: a gift confers no rights.

2. He that humbleth himself wishes to be exalted.

3. The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.

4. There are no facts, only interpretations.

5. Morality is but the herd-instinct in the individual.

6. No one talks more passionately about his rights than he who in the depths of his soul doubts whether he has any.

7. Without music, life would be a mistake.

8. Anyone who has declared someone else to be an idiot, a bad apple, is annoyed when it turns out in the end that he isn’t.

9. In large states public education will always be mediocre, for the same reason that in large kitchens the cooking is usually bad.

10. The man of knowledge must be able not only to love his enemies but also to hate his friends.

11. A casual stroll through the lunatic asylum shows that faith does not prove anything.

12. We often refuse to accept an idea merely because the way in which it has been expressed is unsympathetic to us.

13. No victor believes in chance.

14. Convictions are more dangerous foes of truth than lies.

15. Talking much about oneself can also be a means to conceal oneself.

16. It is not a lack of love, but a lack of friendship that makes unhappy marriages.

17. The essence of all beautiful art, all great art, is gratitude.

18. The future influences the present just as much as the past.

19. The most common lie is that which one tells himself; lying to others is relatively an exception.

20. I counsel you, my friends: Distrust all in whom the impulse to punish is powerful.

21. Rejoicing in our joy, not suffering over our suffering, is what makes someone a friend.

22. God is a thought who makes crooked all that is straight.

23. Success has always been a great liar.

24. Nothing on earth consumes a man more quickly than the passion of resentment.

25. What do you regard as most humane? To spare someone shame.

26. Whatever is done for love always occurs beyond good and evil.

27. When a hundred men stand together, each of them loses his mind and gets another one.

28. When one has a great deal to put into it, a day has a hundred pockets.

29. Whoever despises himself nonetheless respects himself as one who despises.

30. All things are subject to interpretation. Whichever interpretation prevails at a given time is a function of power and not truth.

31. What is good? All that heightens the feeling of power, the will to power, power itself. What is bad? All that is born of weakness. What is happiness? The feeling that power is growing, that resistance is overcome.

32. Fear is the mother of morality.

33. A politician divides mankind into two classes: tools and enemies.

34. Everyone who has ever built anywhere a new heaven first found the power thereto in his own hell.

35. There is more wisdom in your body than in your deepest philosophy.

36. The mother of excess is not joy but joylessness.

37. The Kingdom of Heaven is a condition of the heart — not something that comes upon the earth or after death.

38. What is the mark of liberation? No longer being ashamed in front of oneself.

39. Glance into the world just as though time were gone: and everything crooked will become straight to you.

40. We should consider every day lost on which we have not danced at least once.

The Illusion of Free Choice

Oct 24, 2012

Left or right, both lead to the slaughterhouse.

2045: The Year Man Becomes Immortal Pg.5

Sep 1, 2012


Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."

In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.

But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
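Kurzweil's phone comparison is easy to sanity-check. Taking the figures from the text at face value (a millionth the price and a thousand times the power is a combined ~10^9 gain in price-performance over 40 years), a short back-of-the-envelope sketch recovers the implied doubling time; the variable names and the assumption of smooth doubling are mine, not Kurzweil's:

```python
import math

# Assumed figures from the text: ~1,000,000x cheaper and ~1,000x more
# powerful over 40 years, i.e. a ~10**9 price-performance improvement.
improvement = 1e6 * 1e3
years = 40

# If the gain compounded smoothly, each doubling took years / log2(gain).
doubling_time = years / math.log2(improvement)
print(round(doubling_time * 12))  # ≈ 16 months per doubling
```

That works out to a doubling roughly every 16 months, a little faster than the two-year Moore's law rhythm, which is consistent with the article's point that price-performance compounds across more dimensions than transistor count alone.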



2045: The Year Man Becomes Immortal Pg.4


But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."

Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.

Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?

Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?

Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.


2045: The Year Man Becomes Immortal Pg.3


Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."

Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he's not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
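The scale of that billion-times claim falls directly out of the compounding logic, and a minimal sketch makes it concrete. The function below is purely illustrative (Kurzweil's actual model tracks many indexes, not a single doubling quantity); it just shows that a fixed-interval doubling process needs only about 30 doublings to grow a billionfold, since 2^30 ≈ 1.07 × 10^9:

```python
def doublings_to_reach(factor: float) -> int:
    """Count the doublings needed for a quantity to grow by at least `factor`."""
    n, growth = 0, 1.0
    while growth < factor:
        growth *= 2
        n += 1
    return n

# A billionfold increase takes only ~30 doublings: 2**30 ≈ 1.07e9.
print(doublings_to_reach(1e9))  # 30
```

This is why exponential forecasts feel so jarring: thirty modest-sounding doublings, one every year or two, span the gap between today's computing base and a billion times all human intelligence.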

The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.

For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.

Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.


2045: The Year Man Becomes Immortal Pg.2


People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."

By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting — you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
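The extrapolation behind the law of accelerating returns is simple enough to sketch. The function below is a hypothetical illustration with made-up starting values, not Kurzweil's data; it just shows what a fixed doubling period does to any quantity over a few decades:

```python
def extrapolate(value: float, years: int, doubling_period: float = 2.0) -> float:
    """Project `value` forward assuming it doubles every `doubling_period` years."""
    return value * 2 ** (years / doubling_period)

# Forty years of doubling every two years is 20 doublings,
# i.e. a 2**20 ≈ millionfold increase over the starting value.
print(extrapolate(1.0, 40))  # 1048576.0
```

The linear intuition the article describes would predict a 20x gain over that span; the exponential curve delivers roughly a millionfold one, which is the whole gap between ordinary forecasting and Singularitarian forecasting.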


2045: The Year Man Becomes Immortal Pg.1


On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

On the show, the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.

True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.


Skeptic Bibles (9th Edition). Part 2

Aug 8, 2012

Christianity
The belief that a god created a universe 12.75 billion light years across containing 200 billion galaxies, each of which contains an average of more than 200 billion stars, just so he could have a personal relationship with you.

One day Hitler and his officers were out doing Nazi stuff.

Some kids nearby noticed that one of Hitler's officers was bald so they made fun of him. They called him a "bald head."

Hitler said "Ok guys, shoot these kids." So they did, they shot and killed all 42 kids.

This didn't happen, but one time god sent bears to kill 42 kids for calling one of his servants bald.
(2 Kings 2:23,24)

"Oh, I love your religion ...for the crazy! Virgin birth. Water into wine. It's like Harry Potter, but it causes genocide and bad folk music."

- Roger the Alien, American Dad

Eat. Survive. Reproduce.
Eat. Survive. Reproduce.
Eat. Survive. Reproduce.
Eat. Survive. Reproduce.
What's it all about?


The Soul Mate

Aug 4, 2012

Plato defines Soul mate
In his dialogue The Symposium, Plato has Aristophanes present a story about soul mates. Aristophanes states that humans originally had four arms, four legs, and a single head made of two faces, but Zeus feared their power and split them all in half, condemning them to spend their lives searching for the other half to complete them.

Theosophy defines Soul mate

According to Theosophy, whose claims were modified by Edgar Cayce, God created androgynous souls—equally male and female. Later theories postulate that the souls split into separate genders, perhaps because they incurred karma while playing around on the Earth, or "separation from God." Over a number of reincarnations, each half seeks the other. When all karmic debt is purged, the two will fuse back together and return to the ultimate.

Current usage of the concept
In current usage, "soulmate" usually refers to a romantic partner, with the implication of an exclusive lifelong bond.



I think the concept of a soul mate is just an example of mass cognitive dissonance. The notion that everyone has someone out there waiting just for them is feeble-minded.

There are over seven billion (7,000,000,000) people on earth, and yet somehow people believe they each have their own special preselected mate living down the street. And they will meet how? Through some sort of higher power? I say there is no god, and even if there were, it wouldn't have any semblance of involvement in getting human beings laid.

According to the U.S. Department of Health and Human Services' National Center for Health Statistics, the current world sex ratio is 105 boys for every 100 girls alive at the moment. So how does this affect the soul mate paradox? Well, about 2.4% of the world's population are males with no females left for them. So what? Does that mean those approximately 170,000,000 males are destined to be homosexual and soul mate matches for each other? But that brings us back to the previous problem of geography: it's not as if all 170 million males are in the same cluster, so how are they each going to meet? And if soul mates are the work of god, it was my understanding Christians look down on the homosexual community, so that doesn't work either.
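The arithmetic here is easy to check directly. A quick sketch, assuming the two figures given above (a 105:100 sex ratio and a population of seven billion):

```python
# Quick sanity check of the sex-ratio arithmetic (figures assumed from above).
population = 7_000_000_000
boys_per_100_girls = 105

# Of every 205 people, 5 are "surplus" males with no female counterpart.
surplus_fraction = (boys_per_100_girls - 100) / (boys_per_100_girls + 100)
surplus_males = surplus_fraction * population

print(f"{surplus_fraction:.2%}")   # 2.44%
print(f"{surplus_males:,.0f}")     # 170,731,707
```

So the surplus works out to roughly 170 million men, or about 2.4% of the population.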


Let's take a look at the psychological ramifications of the soul mate concept. It is my belief that most men and women are floozies, promiscuous little fiends driven solely by an innate carnal desire. Society has developed an anti-promiscuity ideology that shuns the whores and harlots from the community, and although that stigma has been all but abolished in the modern day by the overwhelming flood of media and its dogs, traces of it still reside in society. So individuals try to justify their whorish behaviors (for lack of a better term) by explaining that they are looking for their soul mate, which is an impossibility.

Narcissism will be my final talking point today. We are creating a vastly narcissistic society, and possibly the greatest feeling of power comes from power over a fellow human being. Individuals use the concept of soul mates to lure unsuspecting victims into a false sense of security and intimacy, betray the other half, and feed off the misery that comes from crushing a fellow human being's reality. In this way, everyday people can be classified as sociopaths, the same as stalkers and murderers, but they are not prosecuted because these are not considered violent crimes. I suppose it is just part of socialization: put yourself out there and you will be hurt, and it's not against the law because either individual is free to leave the union whenever they so wish. But for the social predator, soul mates are simply another weapon to use to feed off the suffering that comes from hurting someone who trusted them.

Life Imitating Art

Aug 2, 2012

Anti-mimesis is a philosophical position that holds the direct opposite of mimesis. Its most notable proponent is Oscar Wilde, who held in his 1889 essay The Decay of Lying that "Life imitates Art far more than Art imitates Life". In the essay, written as a Platonic dialogue, Wilde holds that such anti-mimesis "results not merely from Life's imitative instinct, but from the fact that the self-conscious aim of Life is to find expression, and that Art offers it certain beautiful forms through which it may realise that energy."

Wilde's antimimetic philosophy has had influence on later writers, including Brian Friel. McGrath places it in a tradition of Irish writing, including Wilde and writers such as Synge and Joyce, that "elevate[s] blarney (in the form of linguistic idealism) to aesthetic and philosophical distinction", noting that Terry Eagleton observes an even longer tradition that stretches "as far back in Irish thought as the ninth-century theology of John Scottus Eriugena" and "the fantastic hyperbole of the ancient sagas". Wilde's antimimetic idealism, specifically, McGrath describes as part of the late nineteenth-century debate between Romanticism and Realism.

Antimimesis, as set out by Wilde in The Decay of Lying, is the reverse of the Aristotelian principle of mimesis. Far from art imitating life, as mimesis would hold, Wilde holds that art sets the aesthetic principles by which people perceive life. What is found in life and nature is not what is really there, but is that which artists have taught people to find there, through art. Wilde presents the fogs of London as an example, arguing that although "there may have been fogs for centuries in London", people have only "seen" the "wonderful brown fogs that come creeping down our streets, blurring the gas lamps and turning houses into shadows" because "poets and painters have taught [people] the loveliness of such effects". "They did not exist", asserts Wilde, "till Art had invented them."

Halliwell asserts that "far from constituting the ne plus ultra of antimimeticism", the notion that life imitates art actually derives from classical notions that can be traced as far back as the writings of Aristophanes of Byzantium, and does not negate mimesis but rather "displace[s] its purpose onto the artlike fashioning of life itself". Halliwell draws a parallel between Wilde's philosophy and Aristophanes' famous question about the comedies written by Menander: "O Menander and Life! Which of you took the other as your model?", noting, however, that Aristophanes was a precursor to Wilde, and was not necessarily espousing the positions that Wilde was later to propound.



Definition for art imitates life:
The observation that a creative work was inspired by true events; based on a true story.

Life Lesson

All Women are lesbians.
Every Woman has a boyfriend.
All Women are married.


So don't even try.

It's a simple life philosophy to live by and is guaranteed to improve the lives of everyone that understands the reality of life.

The same principles apply to both men and women. The bottom line is: everyone should stay home, keep to themselves, and not be fooled by social illusions; most importantly, have faith in humankind, but always know that nobody in life is trustworthy.

Well everybody im single again. but guess what im not upset cuz why would i want someone thats gonna cheat on me.
"Oh no im sorry"

"its ok im better off anyways"

"Yes u deserve better. Men get on my damn nerves!"

"awww wut happend"

"He wanted to b with his wife"

Philosophy, Proof

Jul 26, 2012

If you are reading this. That is evidence you can read.
And if you aren't reading this -- absence of proof is proof of nothing.

Hell

Jul 5, 2012


In many religious traditions, hell is a place of suffering and punishment in an afterlife, often after resurrection. Religions with a linear divine history often depict hells as endless. Religions with a cyclic history often depict a hell as an intermediary period between incarnations. Typically these traditions locate hell under the Earth's external surface and often include entrances to Hell from the land of the living. Other afterlife destinations include Heaven, Purgatory, Paradise, and Limbo.



Punishment in Hell typically corresponds to sins committed during life. Sometimes these distinctions are specific, with damned souls suffering for each sin committed, but sometimes they are general, with condemned sinners relegated to one or more chamber of Hell or to a level of suffering.

But what stature does the meaning of Hell hold in the modern world? It no longer represents infernal punishment or a place for the damned to suffer an eternity for their sins in life. The concept of hell has become a joke, quite literally. I say this because:

First off, there is the use of Hell as an exclamation, used to express annoyance or surprise. A far cry from its original use.

Second, the media representation of hell in movies, music, television, etc. has devolved into nothing short of a parody of its former self. When the concept of hell started, it stood for the worst place in the world, where the worst atrocities took place. Now cartoons depict dimwitted characters entering hell with ease and escaping even more easily. Song after song is produced proclaiming deals with the devil and survival of the forces of hell, all performed by self-infatuated megalomaniacs with delusions of grandeur.

Now don't get me wrong: by no means am I saying I believe in the notion of Hell, nor am I saying anyone who shows signs of megalomania is bad for writing a song about a deal with the devil or whatever the case may be, because music is music and everyone has different reasons for what they do: self-expression, fame, creativity, etc.





Third, the idea of hell has over the years become diluted by its overuse and its affiliation with inferior content. A few hundred years ago, if someone were merely to mention the word hell in conversation, it would be taken with the utmost seriousness. Today the word is used so frivolously in conversation that it has devolved into a minor curse. Society as a whole has become so desensitized to the word hell, as is the case with other forms of profanity, that it has lost its meaning and identity.

Fourth, the original usage and meaning of Hell was a place of suffering and punishment in an afterlife. Hell was meant to depict what a life of evil would lead to, most likely as an institution geared toward frightening citizens away from the sins and crimes against their fellow humans of the day and age. An act of social deviance, if severe enough, would result in the doer being cast down into the fiery abyss to suffer worse than anything imaginable.

In conclusion, I remain impartial to the ever-changing meaning and usage of Hell, because one of the things I really enjoy about the English language is how new words are perpetually incorporated, discontinued, and evolved over the years. Like a constant machine of consumption, the English have throughout history explored new lands; when a land was already inhabited, they would conquer it, but in doing so they would incorporate the native ways of life into the newly created colonies, in turn bringing parts of those ways of life back to England and on to other colonies, thus further advancing the greater whole. Now, in this age of technological advancement, we have already seen changes in language, such as LOL, and we will continue to see many more, whether they are truly conceived as a step forward or back. Consider, too, the utter complexity of language: perhaps it cannot be conceived simply in two dimensions, and more critical analysis is required to understand the full complexity of it all.

Prisoner's Dilemma

Jul 4, 2012

I stumbled across this concept called Prisoner's Dilemma which I am finding to be extremely interesting and wanted to share.


The prisoner's dilemma is a canonical example of a game analyzed in game theory that shows why two individuals might not cooperate, even if it appears that it is in their best interest to do so. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and gave it the "prisoner's dilemma" name (Poundstone, 1992). A classic example of the prisoner's dilemma (PD) is presented as follows:

Two men are arrested, but the police do not possess enough information for a conviction. Following the separation of the two men, the police offer both a similar deal: if one testifies against his partner (defects/betrays) and the other remains silent (cooperates), the betrayer goes free and the one who remains silent receives the full one-year sentence. If both remain silent, both are sentenced to only one month in jail for a minor charge. If each 'rats out' the other, each receives a three-month sentence. Each prisoner must choose either to betray or remain silent; the decision of each is kept secret from the other. What should they do?

If it is supposed here that each player is only concerned with lessening his time in jail, the game becomes a non-zero-sum game in which the two players may either assist or betray the other. In the game, the sole worry of each prisoner seems to be maximizing his own reward. The interesting symmetry of this problem is that the logical decision leads each to betray the other, even though their individual 'prize' would be greater if they cooperated.

In the regular version of this game, collaboration is dominated by betrayal, and as a result, the only possible outcome of the game is for both prisoners to betray the other. Regardless of what the other prisoner chooses, one will always gain a greater payoff by betraying the other. Because betrayal is always more beneficial than cooperation, all objective prisoners would seemingly betray the other.

In the extended form game, the game is played over and over, and consequently, both prisoners continuously have an opportunity to penalize the other for the previous decision. If the number of times the game will be played is known, the finite aspect of the game means that by backward induction, the two prisoners will betray each other repeatedly.

In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games, for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.

Strategy for the classic prisoners' dilemma

The normal game is shown below:

                               Prisoner B stays silent      Prisoner B betrays
                               (cooperates)                 (defects)

Prisoner A stays silent        Each serves 1 month          Prisoner A: 1 year
(cooperates)                                                Prisoner B: goes free

Prisoner A betrays             Prisoner A: goes free        Each serves 3 months
(defects)                      Prisoner B: 1 year

Here, regardless of what the other decides, each prisoner gets a higher pay-off by betraying the other. For example, Prisoner A can (according to the payoffs above) reason that no matter what Prisoner B chooses, Prisoner A is better off 'ratting him out' (defecting) than staying silent (cooperating). As a result, based on the payoffs above, Prisoner A should logically betray him. The game is symmetric, so Prisoner B should act the same way. Since both rationally decide to defect, each receives a lower reward than if both had stayed quiet. Traditional game theory thus leaves both players worse off than if each had chosen to lessen the sentence of his accomplice at the cost of spending more time in jail himself.
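The dominance argument can be checked mechanically. A small sketch (mine, not part of the original text) encodes the jail-time payoffs from the table and computes Prisoner A's best response to each of B's possible moves:

```python
# Months in jail for (A's move, B's move); lower is better for each prisoner.
SILENT, BETRAY = "silent", "betray"
MONTHS = {
    (SILENT, SILENT): (1, 1),    # both stay silent: 1 month each
    (SILENT, BETRAY): (12, 0),   # A silent, B betrays: A serves a year
    (BETRAY, SILENT): (0, 12),   # A betrays, B silent: A goes free
    (BETRAY, BETRAY): (3, 3),    # mutual betrayal: 3 months each
}

def best_response(b_move):
    """A's jail-minimizing move, holding B's move fixed."""
    return min((SILENT, BETRAY), key=lambda a: MONTHS[(a, b_move)][0])

# Betrayal is A's best response either way, so it strictly dominates.
print(best_response(SILENT), best_response(BETRAY))  # betray betray
```

Because the game is symmetric, the same check gives the same answer for Prisoner B, which is exactly why mutual defection is the only equilibrium.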

The iterated prisoners' dilemma

If two players play prisoners' dilemma more than once in succession and they remember previous actions of their opponent and change their strategy accordingly, the game is called iterated prisoners' dilemma.

The iterated prisoners' dilemma game is fundamental to certain theories of human cooperation and trust. On the assumption that the game can model transactions between two people requiring trust, cooperative behaviour in populations may be modeled by a multi-player, iterated, version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoners' dilemma has also been referred to as the "Peace-War game".

If the game is played exactly N times and both players know this, then it is always game theoretically optimal to defect in all rounds. The only possible Nash equilibrium is to always defect. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to punish the player. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.

Unlike the standard prisoners' dilemma, in the iterated prisoners' dilemma the defection strategy is counter-intuitive and fails badly to predict the behavior of human players. Within standard economic theory, though, this is the only correct answer. The superrational strategy in the iterated prisoners' dilemma with fixed N is to cooperate against a superrational opponent, and in the limit of large N, experimental results on strategies agree with the superrational version, not the game-theoretic rational one.

For cooperation to emerge between game theoretic rational players, the total number of rounds N must be random, or at least unknown to the players. In this case always defect may no longer be a strictly dominant strategy, only a Nash equilibrium. Amongst results shown by Robert Aumann in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome.

Strategy for the iterated prisoners' dilemma

Interest in the iterated prisoners' dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of Cooperation (1984). In it he reports on a tournament he organized of the N step prisoners' dilemma (with N fixed) in which participants have to choose their mutual strategy again and again, and have memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.

Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially purely selfish, by natural selection.

The best deterministic strategy was found to be tit for tat, which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness." When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.
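Tit for tat is as simple to write as its reputation suggests. Below is my sketch of the strategy and its forgiving variant (not Rapoport's original four-line BASIC program), using the conventional 5/3/1/0 point payoffs rather than the jail terms above:

```python
import random

C, D = "C", "D"
# Standard points-based PD payoffs (higher is better): temptation 5,
# reward 3, punishment 1, sucker's payoff 0.
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's previous move.
    return C if not opp_hist else opp_hist[-1]

def tit_for_tat_forgiving(my_hist, opp_hist, p_forgive=0.05):
    # As above, but occasionally cooperate anyway after a defection.
    if not opp_hist:
        return C
    if opp_hist[-1] == D and random.random() < p_forgive:
        return C
    return opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return D

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1, score2 = score1 + p1, score2 + p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (600, 600): steady cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): exploited once, then even
```

Note that tit for tat never outscores its opponent in a single match (it can only lose the opening round); it wins tournaments by doing consistently well against everyone.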

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.

Nice
The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore a purely selfish strategy will not "cheat" on its opponent, for purely self-interested reasons first.
Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.
Forgiving
Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.
Non-envious
The last quality is being non-envious, that is not striving to score more than the opponent (note that a "nice" strategy can never score more than the opponent).

The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the tit for tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being tit for tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.

A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the first iteration and whenever the player and co-player did the same thing at the previous iteration; Pavlov defects when the player and co-player did different things at the previous iteration. For a certain range of parameters, Pavlov beats all other strategies by giving preferential treatment to co-players which resemble Pavlov.
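Win-Stay, Lose-Switch is equally compact. In this illustrative sketch (mine, not from the original text), two Pavlov players hit a one-off mistaken defection and recover mutual cooperation within two rounds:

```python
C, D = "C", "D"

def pavlov(my_hist, opp_hist):
    # Cooperate first; thereafter cooperate iff both players made the
    # same move last round, otherwise defect (Win-Stay, Lose-Switch).
    if not my_hist:
        return C
    return C if my_hist[-1] == opp_hist[-1] else D

# Two Pavlov players; player 1 defects by mistake in round 3 (index 2).
h1, h2 = [], []
for i in range(8):
    m1 = D if i == 2 else pavlov(h1, h2)
    m2 = pavlov(h2, h1)
    h1.append(m1)
    h2.append(m2)

print("".join(h1))  # CCDDCCCC -- back to cooperation two rounds later
print("".join(h2))  # CCCDCCCC
```

The mismatch makes both players defect once in unison, after which "same move last round" holds again and cooperation resumes.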

Deriving the optimal strategy is generally done in two ways:

Bayesian Nash Equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit for tat, 50% always cooperate) an optimal counter-strategy can be derived analytically.

Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit for tat players (see for instance Chess 1988), but there is no analytic proof that this will always occur.
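A toy version of such a population simulation can be sketched in a few lines. In my simplified variant (the lowest scorer dies, the highest scorer reproduces, and there is no mutation), tit for tat players displace unconditional defectors:

```python
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def play(s1, s2, rounds=50):
    """Score one iterated match between two strategies."""
    h1, h2, a, b = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        pa, pb = PAYOFF[(m1, m2)]
        a, b = a + pa, b + pb
        h1.append(m1)
        h2.append(m2)
    return a, b

tit_for_tat = lambda me, opp: C if not opp else opp[-1]
always_defect = lambda me, opp: D

# Start with an even split, then evolve: each generation plays a full
# round-robin, the worst player dies, and the best player reproduces.
pop = [tit_for_tat] * 10 + [always_defect] * 10
for _ in range(20):
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = play(pop[i], pop[j])
            scores[i] += a
            scores[j] += b
    pop[scores.index(min(scores))] = pop[scores.index(max(scores))]

print(sum(s is tit_for_tat for s in pop))  # 20 -- the defectors die out
```

With enough tit for tat players around to reward each other, every generation's worst scorer is a defector, so the defectors are replaced one by one.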

Although tit for tat is considered to be the most robust basic strategy, a team from Southampton University in England (led by Professor Nicholas Jennings  and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers, Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary iterated prisoners' dilemma competition, which proved to be more successful than tit for tat. This strategy relied on cooperation between programs to achieve the highest number of points for a single program. The University submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result, this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.

This strategy takes advantage of the fact that multiple entries were allowed in this particular competition, and that the performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing). In a competition where one has control of only a single player, tit for tat is certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when analysing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework for analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that most probably Axelrod would not have allowed them if they had been submitted. It also relies on circumventing rules about the prisoners' dilemma in that there is no communication allowed between the two players. When the Southampton programs engage in an opening "ten move dance" to recognize one another, this only reinforces just how valuable communication can be in shifting the balance of the game.

The Prisoner's Dilemma simplified

Summary


The “dilemma” faced is that, whatever the other does, each is better off confessing than remaining silent. But the outcome obtained when both confess is worse for each than the outcome they would have obtained had both remained silent.

A common view is that the puzzle illustrates a conflict between individual and group rationality. A group whose members pursue rational self-interest may all end up worse off than a group whose members act contrary to rational self-interest. More generally, if the payoffs are not assumed to represent self-interest, a group whose members rationally pursue any goals may all meet less success than if they had not rationally pursued their goals individually. A closely related view is that the prisoner's dilemma game and its multi-player generalizations model familiar situations in which it is difficult to get rational, selfish agents to cooperate for their common good. Much of the contemporary literature has focused on identifying conditions under which players would or should make the “cooperative” move corresponding to remaining silent.

A slightly different interpretation takes the game to represent a choice between selfish behavior and socially desirable altruism. The move corresponding to confession benefits the actor, no matter what the other does, while the move corresponding to silence benefits the other player no matter what that player does. Benefiting oneself is not always wrong, of course, and benefiting others at the expense of oneself is not always morally required, but in the prisoner's dilemma game both players prefer the outcome with the altruistic moves to that with the selfish moves. This observation has led David Gauthier and others to take the Prisoner's Dilemma to say something important about the nature of morality.

Separation

Jun 24, 2012

The concept of the separation of church and state refers to the distance in the relationship between organized religion and the nation state.

The concept of separation has been adopted in a number of countries, to varying degrees depending on the applicable legal structures and prevalent views toward the proper role of religion in society. A similar but typically stricter principle of laïcité has been applied in France and Turkey, while some socially secularized countries such as Norway, Denmark and the UK have maintained constitutional recognition of an official state religion. The concept parallels various other international social and political ideas, including secularism, disestablishment, religious liberty, and religious pluralism. Whitman (2009) observes that in many European countries, the state has, over the centuries, taken over the social roles of the church, leading to a generally secularized public sphere.

The degree of separation varies from total separation mandated by a constitution, to an official religion with total prohibition of the practice of any other religion, as in the Maldives.

The model of separating church and state works, and the same structural logic should be applied to practices within the state. For example, separating males and females would improve everyone's quality of life, in the short term and even more so in the long term. Both groups would of course be given equal rights and resources; it is a matter of improving the environment of all human beings.

For starters, schools would need to be remodeled, either by splitting existing schools by gender, with each side having its own fully furnished facilities, or by building separate schools entirely. Outside of school, the media would be reformatted, with each group receiving its own specially programmed content. Confrontation over which program should be on, or dissatisfaction with the subject matter viewed, would no longer be a concern.

Well-organized media content would be managed by all the authorities involved: government, educational, developmental and parental. Sites, for example, would be analyzed mercilessly by both human reviewers and sophisticated web crawlers. The sites most relevant and suitable for an individual would be made available, while those deemed unethical or restricted would never even be known to that individual.

The youth of this future would be privileged to grow up in a world where the dangers of corrupting influences are unimaginable. A child would be free to spend its time and energy on further expanding its intelligence, its understanding and its world.

Talk is all fun and good, but it won't change the world. The question at hand is: how are we to build this magnificent world for future generations? If these advanced children were here now, they could devise the arrangements for bettering the world, but for them to begin we must start the evolution. The paradox is like that of the chicken and the egg: an egg needs a chicken to lay it, but a chicken needs to have been an egg, so which came first? For the obstacle of the new world, I think we must be patient and fearless, because years of work and waiting may be required to create the new breed with the resources we have at hand. But it will create a future where they can do the same, and thus create a future greater than their own, ultimately forming the foundations of evolution and a future superior to any we can conceive today.

By Design?

Jun 13, 2012

Copper Pennies + Clear Resin = Beautiful Floor.
 

If you want to try this: save this picture, take it to your local home-improvement store, and ask what type of clear resin would work best and what kind of underlayment would be needed. In this picture they had a concrete floor to work with. Yes, it would be cheaper than the average floor if you did the work yourself. The price, including the cost of the pennies, would probably range from $2.50 to $3.50 a square foot.


Using pennies would be cheaper than tiles, easier to work with, and the result is flush and level. The finished floor surface also looks fantastic.
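A rough back-of-the-envelope check on that per-square-foot estimate. The simple square-grid layout below is an assumption for illustration; a US penny is 0.75 inches across.

```python
# Rough cost check for a penny floor (square-grid layout is an assumption).
PENNY_DIAMETER_IN = 0.75       # a US penny is 0.75 inches across
SQUARE_FOOT_IN2 = 12 * 12      # 144 square inches per square foot

# In a simple square grid, each penny occupies a 0.75" x 0.75" cell.
pennies_per_sqft = SQUARE_FOOT_IN2 / PENNY_DIAMETER_IN ** 2
penny_cost_per_sqft = pennies_per_sqft * 0.01  # one cent per penny

print(round(pennies_per_sqft))        # -> 256 pennies per square foot
print(round(penny_cost_per_sqft, 2))  # -> 2.56 dollars in pennies alone
```

Offsetting alternate rows (hexagonal packing) fits somewhat more pennies per foot; the resin and underlayment then make up the rest of the quoted $2.50–$3.50 range.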



While you're considering redecorating, take a look at this creative storage-space solution.

Using the ceiling rafters in the garage is common practice for homeowners, but installing a simple sliding rack onto the ceiling is a great idea. It replaces the conventional basement/garage storage method: piling up boxes and storage containers, cramming far too many things into a closet, never to open it again for fear of a landslide.


Utilizing the unused ceiling space is a move in the right direction toward creative storage space solutions that our perpetually accumulating and hoarding society desperately needs.


But is improving personal storage practices the right course of action? Wouldn't this just allow people to hoard even more?

Instead of having boxes stacked from the floor to the ceiling, is hanging them from the ceiling any better? It might even be worse: a ceiling can only support so much weight before a bolt comes loose and containers fall on someone.


However, back to the bigger issue at hand: most modern societies have developed into ever-consuming clusters of individuals. Some would call it capitalism, but capitalism is merely a small part of the greater picture.


There are many triggers and causes that lead individuals to accumulate an excess of possessions: nature, nurture, psychology, biology, impulse buying, special offers, fear of not having enough, and many more.


There are so many degrees to which over-accumulation, hoarding, obsession and the rest can be analyzed, debated, discussed and revisited. Every possible contributing factor, psychological, sociological and anthropological, deserves examination.

In the end, serious professional expertise is needed to understand why people simply want more, and countless related questions raise controversy and debate of their own. Contributions from many fields would be welcome. But ultimately there is no perfect answer, no solution; at best we can work toward managing both the physical problem of spatial restrictions and the psychological perplexity of our reach exceeding our grasp.




Meaning Behind The Doctor's Regeneration

Jun 12, 2012




In the 'Doctor Who' franchise, whenever the Doctor is near death his body regenerates, because he is neither human nor from Earth. He is a Time Lord from an ancient planet called Gallifrey, and he has been the last Time Lord in the universe since the Time War, of which he was the sole survivor.

I see the Doctor's trait of regeneration as something that goes beyond classical cellular regeneration repairing damage to the flesh and body. When a Time Lord regenerates, no amount of damage is irreparable. But it goes beyond replacing damaged cells: the Doctor becomes an entirely new person, with a new face, personality, morals and views; even his Tardis changes. The only constant is the name 'Doctor', and even that is not the whole truth, because the Doctor's real name has never been revealed in the show's fifty years and eleven incarnations. He did whisper his name to Rose when they were saying goodbye, and the written form of his name has been shown, but it is in the language of the Time Lords, so who knows what it means.

So, in short, the Doctor is immortal, has been alive for almost a thousand years, is the sole survivor of the fiercest war of all time, is a genius who cannot be rivaled, can travel to any point in time and space at will, and is utterly unstoppable.

So what does this all mean? To me, it means the Doctor is God. He matches the criteria:
  • The Doctor is all-good (omnibenevolence)
  • The Doctor is all-knowing (omniscience)
  • The Doctor is all-powerful (omnipotence)
And he is a being than which none greater can be conceived, in the spirit of the ontological argument.
The Doctor: he who, the Scriptures of Moffat say, brings salvation wherever he goes, transforming the lives of everyone he meets through kindness and sacrifice. Ridiculous? Well, maybe, but the analogy has always seemed perfectly apt to me, not only in the context of the show or as a fan, but as an example of a wider social shift: heroes of popular culture becoming modern figures of worship.

Even the main premise of the show is built on the concept of existential salvation: the idea that one day this wonderful being will drop out of the sky to rescue us from the crippling tedium of adult life, and make us believe there is more to existence than work, bills and over-thinking popular tea-time television shows.