{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"_(This essay is derived from a talk at Google.)_\n\nA few weeks ago I found to my surprise that I'd been granted four [patents](http:\/\/paulgraham.infogami.com\/blog\/morepatents). This was all the more surprising because I'd only applied for three. The patents aren't mine, of course. They were assigned to Viaweb, and became Yahoo's when they bought us. But the news set me thinking about the question of software patents generally.\n\nPatents are a hard problem. I've had to advise most of the startups we've funded about them, and despite years of experience I'm still not always sure I'm giving the right advice.\n\nOne thing I do feel pretty certain of is that if you're against software patents, you're against patents in general. Gradually our machines consist more and more of software. Things that used to be done with levers and cams and gears are now done with loops and trees and closures. There's nothing special about physical embodiments of control systems that should make them patentable, and the software equivalent not.\n\nUnfortunately, patent law is inconsistent on this point. Patent law in most countries says that algorithms aren't patentable. This rule is left over from a time when \"algorithm\" meant something like the Sieve of Eratosthenes. In 1800, people could not see as readily as we can that a great many patents on mechanical objects were really patents on the algorithms they embodied.\n\nPatent lawyers still have to pretend that's what they're doing when they patent algorithms. You must not use the word \"algorithm\" in the title of a patent application, just as you must not use the word \"essays\" in the title of a book. If you want to patent an algorithm, you have to frame it as a computer system executing that algorithm. Then it's mechanical; phew. The default euphemism for algorithm is \"system and method.\" Try a patent search for that phrase and see how many results you get.\n\nSince software patents are no different from hardware patents, people who say \"software patents are evil\" are saying simply \"patents are evil.\" So why do so many people complain about software patents specifically?\n\nI think the problem is more with the patent office than the concept of software patents. Whenever software meets government, bad things happen, because software changes fast and government changes slow. The patent office has been overwhelmed by both the volume and the novelty of applications for software patents, and as a result they've made a lot of mistakes.\n\nThe most common is to grant patents that shouldn't be granted. To be patentable, an invention has to be more than new. It also has to be non-obvious. And this, especially, is where the USPTO has been dropping the ball. Slashdot has an icon that expresses the problem vividly: a knife and fork with the words \"patent pending\" superimposed.\n\nThe scary thing is, this is the _only_ icon they have for patent stories. Slashdot readers now take it for granted that a story about a patent will be about a bogus patent. That's how bad the problem has become.\n\nThe problem with Amazon's notorious one-click patent, for example, is not that it's a software patent, but that it's obvious. Any online store that kept people's shipping addresses would have implemented this. 
The reason Amazon did it first was not that they were especially smart, but because they were one of the earliest sites with enough clout to force customers to log in before they could buy something. \\[[1](#f1n)\\]\n\nWe, as hackers, know the USPTO is letting people patent the knives and forks of our world. The problem is, the USPTO are not hackers. They're probably good at judging new inventions for casting steel or grinding lenses, but they don't understand software yet.\n\nAt this point an optimist would be tempted to add \"but they will eventually.\" Unfortunately that might not be true. The problem with software patents is an instance of a more general one: the patent office takes a while to understand new technology."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"An investor wants to give you money for a certain percentage of your startup. Should you take it? You're about to hire your first employee. How much stock should you give him?\n\nThese are some of the hardest questions founders face. And yet both have the same answer:\n\n1\/(1 - n)\n\nWhenever you're trading stock in your company for anything, whether it's money or an employee or a deal with another company, the test for whether to do it is the same. You should give up n% of your company if what you trade it for improves your average outcome enough that the (100 - n)% you have left is worth more than the whole company was before.\n\nFor example, if an investor wants to buy half your company, how much does that investment have to improve your average outcome for you to break even? Obviously it has to double: if you trade half your company for something that more than doubles the company's average outcome, you're net ahead. You have half as big a share of something worth more than twice as much.\n\nIn the general case, if n is the fraction of the company you're giving up, the deal is a good one if it makes the company worth more than 1\/(1 - n).\n\nFor example, suppose Y Combinator offers to fund you in return for 7% of your company. In this case, n is .07 and 1\/(1 - n) is 1.075. So you should take the deal if you believe we can improve your average outcome by more than 7.5%. If we improve your outcome by 10%, you're net ahead, because the remaining .93 you hold is worth .93 x 1.1 = 1.023. \\[[1](#f1n)\\]\n\nOne of the things the equity equation shows us is that, financially at least, taking money from a top VC firm can be a really good deal. Greg Mcadoo from Sequoia recently said at a YC dinner that when Sequoia invests alone they like to take about 30% of a company. 1\/.7 = 1.43, meaning that deal is worth taking if they can improve your outcome by more than 43%. For the average startup, that would be an extraordinary bargain. It would improve the average startup's prospects by more than 43% just to be able to _say_ they were funded by Sequoia, even if they never actually got the money.\n\nThe reason Sequoia is such a good deal is that the percentage of the company they take is artificially low. They don't even try to get market price for their investment; they limit their holdings to leave the founders enough stock to feel the company is still theirs.\n\nThe catch is that Sequoia gets about 6000 business plans a year and funds about 20 of them, so the odds of getting this great deal are 1 in 300. The companies that make it through are not average startups.\n\nOf course, there are other factors to consider in a VC deal. It's never just a straight trade of money for stock. But if it were, taking money from a top firm would generally be a bargain.\n\nYou can use the same formula when giving stock to employees, but it works in the other direction. If i is the average outcome for the company with the addition of some new person, then they're worth n such that i = 1\/(1 - n). Which means n = (i - 1)\/i.\n\nFor example, suppose you're just two founders and you want to hire an additional hacker who's so good you feel he'll increase the average outcome of the whole company by 20%. n = (1.2 - 1)\/1.2 = .167. So you'll break even if you trade 16.7% of the company for him.\n\nThat doesn't mean 16.7% is the right amount of stock to give him. 
Stock is not the only cost of hiring someone: there's usually salary and overhead as well. And if the company merely breaks even on the deal, there's no reason to do it.\n\nI think to translate salary and overhead into stock you should multiply the annual rate by about 1.5. Most startups grow fast or die; if you die you don't have to pay the guy, and if you grow fast you'll be paying next year's salary out of next year's valuation, which should be 3x this year's. If your valuation grows 3x a year, the total cost in stock of a new hire's salary and overhead is 1.5 years' cost at the present valuation."} |
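The break-even reasoning in the passage above reduces to a couple of one-line formulas. As a minimal illustrative sketch (not part of the original essay; the function names and sample numbers are mine), the same arithmetic in Python:

```python
def breakeven_multiplier(n: float) -> float:
    """How much a deal must multiply the company's average outcome to
    justify giving up a fraction n of the stock: 1 / (1 - n)."""
    return 1 / (1 - n)

def breakeven_stock_for_hire(i: float) -> float:
    """The hiring direction of the same equation: if a new person
    multiplies the average outcome by i, you break even at n = (i - 1) / i."""
    return (i - 1) / i

def annual_cost_in_stock(salary_plus_overhead: float) -> float:
    """The essay's rule of thumb: value a hire's annual salary and overhead
    at about 1.5x when weighing it against stock, assuming ~3x yearly growth."""
    return 1.5 * salary_plus_overhead

# The essay's own examples:
print(breakeven_multiplier(0.07))      # ~1.075 -> 7% to YC must improve the outcome by >7.5%
print(breakeven_multiplier(0.30))      # ~1.43  -> 30% to a VC must improve the outcome by >43%
print(breakeven_stock_for_hire(1.2))   # ~0.167 -> a hire worth +20% breaks even at 16.7% of the stock
```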
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"_(This essay is from the introduction to_ [On Lisp](onlisp.html)_.)_\n\nIt's a long-standing principle of programming style that the functional elements of a program should not be too large. If some component of a program grows beyond the stage where it's readily comprehensible, it becomes a mass of complexity which conceals errors as easily as a big city conceals fugitives. Such software will be hard to read, hard to test, and hard to debug.\n\nIn accordance with this principle, a large program must be divided into pieces, and the larger the program, the more it must be divided. How do you divide a program? The traditional approach is called _top-down design:_ you say \"the purpose of the program is to do these seven things, so I divide it into seven major subroutines. The first subroutine has to do these four things, so it in turn will have four of its own subroutines,\" and so on. This process continues until the whole program has the right level of granularity-- each part large enough to do something substantial, but small enough to be understood as a single unit.\n\nExperienced Lisp programmers divide up their programs differently. As well as top-down design, they follow a principle which could be called _bottom-up design_\\-- changing the language to suit the problem. In Lisp, you don't just write your program down toward the language, you also build the language up toward your program. As you're writing a program you may think \"I wish Lisp had such-and-such an operator.\" So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on. Language and program evolve together. Like the border between two warring states, the boundary between language and program is drawn and redrawn, until eventually it comes to rest along the mountains and rivers, the natural frontiers of your problem. In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient.\n\nIt's worth emphasizing that bottom-up design doesn't mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you'll get an arch.\n\nIn typical code, once you abstract out the parts which are merely bookkeeping, what's left is much shorter; the higher you build up the language, the less distance you will have to travel from the top down to it. This brings several advantages:\n\n1. By making the language do more of the work, bottom-up design yields programs which are smaller and more agile. A shorter program doesn't have to be divided into so many components, and fewer components means programs which are easier to read or modify. Fewer components also means fewer connections between components, and thus less chance for errors there. As industrial designers strive to reduce the number of moving parts in a machine, experienced Lisp programmers use bottom-up design to reduce the size and complexity of their programs.\n\n2. Bottom-up design promotes code re-use. When you write two or more programs, many of the utilities you wrote for the first program will also be useful in the succeeding ones. 
Once you've acquired a large substrate of utilities, writing a new program can take only a fraction of the effort it would require if you had to start with raw Lisp.\n\n3. Bottom-up design makes programs easier to read. An instance of this type of abstraction asks the reader to understand a general-purpose operator; an instance of functional abstraction asks the reader to understand a special-purpose subroutine. \\[1\\]\n\n4. Because it causes you always to be on the lookout for patterns in your code, working bottom-up helps to clarify your ideas about the design of your program."} |
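The essay describes this move in Lisp; purely as an illustrative sketch (not from the original, and in Python rather than Lisp, with hypothetical names), the shape of bottom-up design is pulling a general-purpose operator out of what would otherwise be two special-purpose routines, so each caller shrinks:

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def find_if(pred: Callable[[T], bool], xs: Iterable[T]) -> Optional[T]:
    """A small general-purpose operator (named after Common Lisp's find-if):
    return the first element satisfying pred, or None if there is none."""
    for x in xs:
        if pred(x):
            return x
    return None

# Two unrelated parts of a program, each now a one-liner on top of the utility:
def first_overdue(invoices):
    return find_if(lambda inv: inv["days_late"] > 30, invoices)

def first_admin(users):
    return find_if(lambda u: u.get("role") == "admin", users)
```

The point is not this particular helper, which Python could express other ways, but the direction of the design: the language grows a layer of operators, and the program written on top of them gets shorter.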
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"_(This essay is derived from a guest lecture in Sam Altman's [startup class](http:\/\/startupclass.samaltman.com\/) at Stanford. It's intended for college students, but much of it is applicable to potential founders at other ages.)_\n\nOne of the advantages of having kids is that when you have to give advice, you can ask yourself \"what would I tell my own kids?\" My kids are little, but I can imagine what I'd tell them about startups if they were in college, and that's what I'm going to tell you.\n\nStartups are very counterintuitive. I'm not sure why. Maybe it's just because knowledge about them hasn't permeated our culture yet. But whatever the reason, starting a startup is a task where you can't always trust your instincts.\n\nIt's like skiing in that way. When you first try skiing and you want to slow down, your instinct is to lean back. But if you lean back on skis you fly down the hill out of control. So part of learning to ski is learning to suppress that impulse. Eventually you get new habits, but at first it takes a conscious effort. At first there's a list of things you're trying to remember as you start down the hill.\n\nStartups are as unnatural as skiing, so there's a similar list for startups. Here I'm going to give you the first part of it \u2014 the things to remember if you want to prepare yourself to start a startup.\n\n**Counterintuitive**\n\nThe first item on it is the fact I already mentioned: that startups are so weird that if you trust your instincts, you'll make a lot of mistakes. If you know nothing more than this, you may at least pause before making them.\n\nWhen I was running Y Combinator I used to joke that our function was to tell founders things they would ignore. It's really true. Batch after batch, the YC partners warn founders about mistakes they're about to make, and the founders ignore them, and then come back a year later and say \"I wish we'd listened.\"\n\nWhy do the founders ignore the partners' advice? Well, that's the thing about counterintuitive ideas: they contradict your intuitions. They seem wrong. So of course your first impulse is to disregard them. And in fact my joking description is not merely the curse of Y Combinator but part of its raison d'etre. If founders' instincts already gave them the right answers, they wouldn't need us. You only need other people to give you advice that surprises you. That's why there are a lot of ski instructors and not many running instructors. \\[[1](#f1n)\\]\n\nYou can, however, trust your instincts about people. And in fact one of the most common mistakes young founders make is not to do that enough. They get involved with people who seem impressive, but about whom they feel some misgivings personally. Later when things blow up they say \"I knew there was something off about him, but I ignored it because he seemed so impressive.\"\n\nIf you're thinking about getting involved with someone \u2014 as a cofounder, an employee, an investor, or an acquirer \u2014 and you have misgivings about them, trust your gut. If someone seems slippery, or bogus, or a jerk, don't ignore it.\n\nThis is one case where it pays to be self-indulgent. Work with people you genuinely like, and you've known long enough to be sure.\n\n**Expertise**\n\nThe second counterintuitive point is that it's not that important to know a lot about startups. 
The way to succeed in a startup is not to be an expert on startups, but to be an expert on your users and the problem you're solving for them. Mark Zuckerberg didn't succeed because he was an expert on startups. He succeeded despite being a complete noob at startups, because he understood his users really well.\n\nIf you don't know anything about, say, how to raise an angel round, don't feel bad on that account. That sort of thing you can learn when you need to, and forget after you've done it.\n\nIn fact, I worry it's not merely unnecessary to learn in great detail about the mechanics of startups, but possibly somewhat dangerous."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"I've read Villehardouin's chronicle of the Fourth Crusade at least two times, maybe three. And yet if I had to write down everything I remember from it, I doubt it would amount to much more than a page. Multiply this times several hundred, and I get an uneasy feeling when I look at my bookshelves. What use is it to read all these books if I remember so little from them?\n\nA few months ago, as I was reading Constance Reid's excellent biography of Hilbert, I figured out if not the answer to this question, at least something that made me feel better about it. She writes:\n\n> Hilbert had no patience with mathematical lectures which filled the students with facts but did not teach them how to frame a problem and solve it. He often used to tell them that \"a perfect formulation of a problem is already half its solution.\"\n\nThat has always seemed to me an important point, and I was even more convinced of it after hearing it confirmed by Hilbert.\n\nBut how had I come to believe in this idea in the first place? A combination of my own experience and other things I'd read. None of which I could at that moment remember! And eventually I'd forget that Hilbert had confirmed it too. But my increased belief in the importance of this idea would remain something I'd learned from this book, even after I'd forgotten I'd learned it.\n\nReading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists. Your mind is like a compiled program you've lost the source of. It works, but you don't know why.\n\nThe place to look for what I learned from Villehardouin's chronicle is not what I remember from it, but my mental models of the crusades, Venice, medieval culture, siege warfare, and so on. Which doesn't mean I couldn't have read more attentively, but at least the harvest of reading is not so miserably small as it might seem.\n\nThis is one of those things that seem obvious in retrospect. But it was a surprise to me and presumably would be to anyone else who felt uneasy about (apparently) forgetting so much they'd read.\n\nRealizing it does more than make you feel a little better about forgetting, though. There are specific implications.\n\nFor example, reading and experience are usually \"compiled\" at the time they happen, using the state of your brain at that time. The same book would get compiled differently at different points in your life. Which means it is very much worth reading important books multiple times. I always used to feel some misgivings about rereading books. I unconsciously lumped reading together with work like carpentry, where having to do something again is a sign you did it wrong the first time. Whereas now the phrase \"already read\" seems almost ill-formed.\n\nIntriguingly, this implication isn't limited to books. Technology will increasingly make it possible to relive our experiences. When people do that today it's usually to enjoy them again (e.g. when looking at pictures of a trip) or to find the origin of some bug in their compiled code (e.g. when Stephen Fry succeeded in remembering the childhood trauma that prevented him from singing). 
But as technologies for recording and playing back your life improve, it may become common for people to relive experiences without any goal in mind, simply to learn from them again as one might when rereading a book.\n\nEventually we may be able not just to play back experiences but also to index and even edit them. So although not knowing how you know things may seem part of being human, it may not be.\n\n**Thanks** to Sam Altman, Jessica Livingston, and Robert Morris for reading drafts of this.The most damaging thing you learned in school wasn't something you learned in any specific class. It was learning to get good grades.\n\nWhen I was in college, a particularly earnest philosophy grad student once told me that he never cared what grade he got in a class, only what he learned in it."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"There are two distinct ways to be politically moderate: on purpose and by accident. Intentional moderates are trimmers, deliberately choosing a position mid-way between the extremes of right and left. Accidental moderates end up in the middle, on average, because they make up their own minds about each question, and the far right and far left are roughly equally wrong.\n\nYou can distinguish intentional from accidental moderates by the distribution of their opinions. If the far left opinion on some matter is 0 and the far right opinion 100, an intentional moderate's opinion on every question will be near 50. Whereas an accidental moderate's opinions will be scattered over a broad range, but will, like those of the intentional moderate, average to about 50.\n\nIntentional moderates are similar to those on the far left and the far right in that their opinions are, in a sense, not their own. The defining quality of an ideologue, whether on the left or the right, is to acquire one's opinions in bulk. You don't get to pick and choose. Your opinions about taxation can be predicted from your opinions about sex. And although intentional moderates might seem to be the opposite of ideologues, their beliefs (though in their case the word \"positions\" might be more accurate) are also acquired in bulk. If the median opinion shifts to the right or left, the intentional moderate must shift with it. Otherwise they stop being moderate.\n\nAccidental moderates, on the other hand, not only choose their own answers, but choose their own questions. They may not care at all about questions that the left and right both think are terribly important. So you can only even measure the politics of an accidental moderate from the intersection of the questions they care about and those the left and right care about, and this can sometimes be vanishingly small.\n\nIt is not merely a manipulative rhetorical trick to say \"if you're not with us, you're against us,\" but often simply false.\n\nModerates are sometimes derided as cowards, particularly by the extreme left. But while it may be accurate to call intentional moderates cowards, openly being an accidental moderate requires the most courage of all, because you get attacked from both right and left, and you don't have the comfort of being an orthodox member of a large group to sustain you.\n\nNearly all the most impressive people I know are accidental moderates. If I knew a lot of professional athletes, or people in the entertainment business, that might be different. Being on the far left or far right doesn't affect how fast you run or how well you sing. But someone who works with ideas has to be independent-minded to do it well.\n\nOr more precisely, you have to be independent-minded about the ideas you work with. You could be mindlessly doctrinaire in your politics and still be a good mathematician. In the 20th century, a lot of very smart people were Marxists \ufffd just no one who was smart about the subjects Marxism involves. But if the ideas you use in your work intersect with the politics of your time, you have two choices: be an accidental moderate, or be mediocre.\n\n**Notes**\n\n\\[1\\] It's possible in theory for one side to be entirely right and the other to be entirely wrong. Indeed, ideologues must always believe this is the case. 
But historically it rarely has been.\n\n\\[2\\] For some reason the far right tend to ignore moderates rather than despise them as backsliders. I'm not sure why. Perhaps it means that the far right is less ideological than the far left. Or perhaps that they are more confident, or more resigned, or simply more disorganized. I just don't know.\n\n\\[3\\] Having heretical opinions doesn't mean you have to express them openly. It may be [easier to have them](say.html) if you don't.\n\n**Thanks** to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston, Amjad Masad, Ryan Petersen, and Harj Taggar for reading drafts of this."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"Great cities attract ambitious people. You can sense it when you walk around one. In a hundred subtle ways, the city sends you a message: you could do more; you should try harder.\n\nThe surprising thing is how different these messages can be. New York tells you, above all: you should make more money. There are other messages too, of course. You should be hipper. You should be better looking. But the clearest message is that you should be richer.\n\nWhat I like about Boston (or rather Cambridge) is that the message there is: you should be smarter. You really should get around to reading all those books you've been meaning to.\n\nWhen you ask what message a city sends, you sometimes get surprising answers. As much as they respect brains in Silicon Valley, the message the Valley sends is: you should be more powerful.\n\nThat's not quite the same message New York sends. Power matters in New York too of course, but New York is pretty impressed by a billion dollars even if you merely inherited it. In Silicon Valley no one would care except a few real estate agents. What matters in Silicon Valley is how much effect you have on the world. The reason people there care about Larry and Sergey is not their wealth but the fact that they control Google, which affects practically everyone.\n\n\\_\\_\\_\\_\\_\n\nHow much does it matter what message a city sends? Empirically, the answer seems to be: a lot. You might think that if you had enough strength of mind to do great things, you'd be able to transcend your environment. Where you live should make at most a couple percent difference. But if you look at the historical evidence, it seems to matter more than that. Most people who did great things were clumped together in a few places where that sort of thing was done at the time.\n\nYou can see how powerful cities are from something I wrote about [earlier](taste.html): the case of the Milanese Leonardo. Practically every fifteenth century Italian painter you've heard of was from Florence, even though Milan was just as big. People in Florence weren't genetically different, so you have to assume there was someone born in Milan with as much natural ability as Leonardo. What happened to him?\n\nIf even someone with the same natural ability as Leonardo couldn't beat the force of environment, do you suppose you can?\n\nI don't. I'm fairly stubborn, but I wouldn't try to fight this force. I'd rather use it. So I've thought a lot about where to live.\n\nI'd always imagined Berkeley would be the ideal place \u2014 that it would basically be Cambridge with good weather. But when I finally tried living there a couple years ago, it turned out not to be. The message Berkeley sends is: you should live better. Life in Berkeley is very civilized. It's probably the place in America where someone from Northern Europe would feel most at home. But it's not humming with ambition.\n\nIn retrospect it shouldn't have been surprising that a place so pleasant would attract people interested above all in quality of life. Cambridge with good weather, it turns out, is not Cambridge. The people you find in Cambridge are not there by accident. You have to make sacrifices to live there. It's expensive and somewhat grubby, and the weather's often bad. 
So the kind of people you find in Cambridge are the kind of people who want to live where the smartest people are, even if that means living in an expensive, grubby place with bad weather.\n\nAs of this writing, Cambridge seems to be the intellectual capital of the world. I realize that seems a preposterous claim. What makes it true is that it's more preposterous to claim about anywhere else. American universities currently seem to be the best, judging from the flow of ambitious students. And what US city has a stronger claim? New York? A fair number of smart people, but diluted by a much larger number of neanderthals in suits. The Bay Area has a lot of smart people too, but again, diluted; there are two great universities, but they're far apart."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"All the best [hackers](gba.html) I know are gradually switching to Macs. My friend Robert said his whole research group at MIT recently bought themselves Powerbooks. These guys are not the graphic designers and grandmas who were buying Macs at Apple's low point in the mid 1990s. They're about as hardcore OS hackers as you can get.\n\nThe reason, of course, is OS X. Powerbooks are beautifully designed and run FreeBSD. What more do you need to know?\n\nI got a Powerbook at the end of last year. When my IBM Thinkpad's hard disk died soon after, it became my only laptop. And when my friend Trevor showed up at my house recently, he was carrying a Powerbook [identical](tlbmac.html) to mine.\n\nFor most of us, it's not a switch to Apple, but a return. Hard as this was to believe in the mid 90s, the Mac was in its time the canonical hacker's computer.\n\nIn the fall of 1983, the professor in one of my college CS classes got up and announced, like a prophet, that there would soon be a computer with half a MIPS of processing power that would fit under an airline seat and cost so little that we could save enough to buy one from a summer job. The whole room gasped. And when the Mac appeared, it was even better than we'd hoped. It was small and powerful and cheap, as promised. But it was also something we'd never considered a computer could be: fabulously well [designed](taste.html).\n\nI had to have one. And I wasn't alone. In the mid to late 1980s, all the hackers I knew were either writing software for the Mac, or wanted to. Every futon sofa in Cambridge seemed to have the same fat white book lying open on it. If you turned it over, it said \"Inside Macintosh.\"\n\nThen came Linux and FreeBSD, and hackers, who follow the most powerful OS wherever it leads, found themselves switching to Intel boxes. If you cared about design, you could buy a Thinkpad, which was at least not actively repellent, if you could get the Intel and Microsoft [stickers](designedforwindows.html) off the front. \\[1\\]\n\nWith OS X, the hackers are back. When I walked into the Apple store in Cambridge, it was like coming home. Much was changed, but there was still that Apple coolness in the air, that feeling that the show was being run by someone who really cared, instead of random corporate deal-makers.\n\nSo what, the business world may say. Who cares if hackers like Apple again? How big is the hacker market, after all?\n\nQuite small, but important out of proportion to its size. When it comes to computers, what hackers are doing now, everyone will be doing in ten years. Almost all technology, from Unix to bitmapped displays to the Web, became popular first within CS departments and research labs, and gradually spread to the rest of the world.\n\nI remember telling my father back in 1986 that there was a new kind of computer called a Sun that was a serious Unix machine, but so small and cheap that you could have one of your own to sit in front of, instead of sitting in front of a VT100 connected to a single central Vax. Maybe, I suggested, he should buy some stock in this company. I think he really wishes he'd listened.\n\nIn 1994 my friend Koling wanted to talk to his girlfriend in Taiwan, and to save long-distance bills he wrote some software that would convert sound to data packets that could be sent over the Internet. We weren't sure at the time whether this was a proper use of the Internet, which was still then a quasi-government entity. 
What he was doing is now called VoIP, and it is a huge and rapidly growing business.\n\nIf you want to know what ordinary people will be doing with computers in ten years, just walk around the CS department at a good university. Whatever they're doing, you'll be doing.\n\nIn the matter of \"platforms\" this tendency is even more pronounced, because novel software originates with [great hackers](gh.html), and they tend to write it first for whatever computer they personally use. And software sells hardware."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"I was thinking recently how inconvenient it was not to have a general term for iPhones, iPads, and the corresponding things running Android. The closest to a general term seems to be \"mobile devices,\" but that (a) applies to any mobile phone, and (b) doesn't really capture what's distinctive about the iPad.\n\nAfter a few seconds it struck me that what we'll end up calling these things is tablets. The only reason we even consider calling them \"mobile devices\" is that the iPhone preceded the iPad. If the iPad had come first, we wouldn't think of the iPhone as a phone; we'd think of it as a tablet small enough to hold up to your ear.\n\nThe iPhone isn't so much a phone as a replacement for a phone. That's an important distinction, because it's an early instance of what will become a common pattern. Many if not most of the special-purpose objects around us are going to be replaced by apps running on tablets.\n\nThis is already clear in cases like GPSes, music players, and cameras. But I think it will surprise people how many things are going to get replaced. We funded one startup that's [replacing keys](http:\/\/lockitron.com\/). The fact that you can change font sizes easily means the iPad effectively replaces reading glasses. I wouldn't be surprised if by playing some clever tricks with the accelerometer you could even replace the bathroom scale.\n\nThe advantages of doing things in software on a single device are so great that everything that can get turned into software will. So for the next couple years, a good [recipe for startups](http:\/\/ycombinator.com\/rfs8.html) will be to look around you for things that people haven't realized yet can be made unnecessary by a tablet app.\n\nIn 1938 Buckminster Fuller coined the term [ephemeralization](http:\/\/en.wikipedia.org\/wiki\/Ephemeralization) to describe the increasing tendency of physical machinery to be replaced by what we would now call software. The reason tablets are going to take over the world is not (just) that Steve Jobs and Co are industrial design wizards, but because they have this force behind them. The iPhone and the iPad have effectively drilled a hole that will allow ephemeralization to flow into a lot of new areas. No one who has studied the history of technology would want to underestimate the power of that force.\n\nI worry about the power Apple could have with this force behind them. I don't want to see another era of client monoculture like the Microsoft one in the 80s and 90s. But if ephemeralization is one of the main forces driving the spread of tablets, that suggests a way to compete with Apple: be a better platform for it.\n\nIt has turned out to be a great thing that Apple tablets have accelerometers in them. Developers have used the accelerometer in ways Apple could never have imagined. That's the nature of platforms. The more versatile the tool, the less you can predict how people will use it. So tablet makers should be thinking: what else can we put in there? Not merely hardware, but software too. What else can we give developers access to? Give hackers an inch and they'll take you a mile.\n\n**Thanks** to Sam Altman, Paul Buchheit, Jessica Livingston, and Robert Morris for reading drafts of this.Thirty years ago, one was supposed to work one's way up the corporate ladder. That's less the rule now. Our generation wants to get paid up front. 
Instead of developing a product for some big company in the expectation of getting job security in return, we develop the product ourselves, in a startup, and sell it to the big company. At the very least we want options.\n\nAmong other things, this shift has created the appearance of a rapid increase in economic inequality. But really the two cases are not as different as they look in economic statistics.\n\nEconomic statistics are misleading because they ignore the value of safe jobs. An easy job from which one can't be fired is worth money; exchanging the two is one of the commonest forms of corruption. A sinecure is, in effect, an annuity."} |
{"context_length":4096,"context_depth":0.0,"secret_code":"n\/a","copy":0,"context":"The Segway hasn't delivered on its initial promise, to put it mildly. There are several reasons why, but one is that people don't want to be seen riding them. Someone riding a Segway looks like a dork.\n\nMy friend Trevor Blackwell built [his own Segway](http:\/\/tlb.org\/#scooter), which we called the Segwell. He also built a one-wheeled version, [the Eunicycle](http:\/\/tlb.org\/#eunicycle), which looks exactly like a regular unicycle till you realize the rider isn't pedaling. He has ridden them both to downtown Mountain View to get coffee. When he rides the Eunicycle, people smile at him. But when he rides the Segwell, they shout abuse from their cars: \"Too lazy to walk, ya fuckin homo?\"\n\nWhy do Segways provoke this reaction? The reason you look like a dork riding a Segway is that you look _smug_. You don't seem to be working hard enough.\n\nSomeone riding a motorcycle isn't working any harder. But because he's sitting astride it, he seems to be making an effort. When you're riding a Segway you're just standing there. And someone who's being whisked along while seeming to do no work \u2014 someone in a sedan chair, for example \u2014 can't help but look smug.\n\nTry this thought experiment and it becomes clear: imagine something that worked like the Segway, but that you rode with one foot in front of the other, like a skateboard. That wouldn't seem nearly as uncool.\n\nSo there may be a way to capture more of the market Segway hoped to reach: make a version that doesn't look so easy for the rider. It would also be helpful if the styling was in the tradition of skateboards or bicycles rather than medical devices.\n\nCuriously enough, what got Segway into this problem was that the company was itself a kind of Segway. It was too easy for them; they were too successful raising money. If they'd had to grow the company gradually, by iterating through several versions they sold to real users, they'd have learned pretty quickly that people looked stupid riding them. Instead they had enough to work in secret. They had focus groups aplenty, I'm sure, but they didn't have the people yelling insults out of cars. So they never realized they were zooming confidently down a blind alley.Silicon Valley proper is mostly suburban sprawl. At first glance it doesn't seem there's anything to see. It's not the sort of place that has conspicuous monuments. But if you look, there are subtle signs you're in a place that's different from other places.\n\n**1\\. [Stanford University](http:\/\/maps.google.com\/maps?q=stanford+university)**\n\nStanford is a strange place. Structurally it is to an ordinary university what suburbia is to a city. It's enormously spread out, and feels surprisingly empty much of the time. But notice the weather. It's probably perfect. And notice the beautiful mountains to the west. And though you can't see it, cosmopolitan San Francisco is 40 minutes to the north. That combination is much of the reason Silicon Valley grew up around this university and not some other one.\n\n**2\\. [University Ave](http:\/\/maps.google.com\/maps?q=university+and+ramona+palo+alto)**\n\nA surprising amount of the work of the Valley is done in the cafes on or just off University Ave in Palo Alto. If you visit on a weekday between 10 and 5, you'll often see founders pitching investors. 
In case you can't tell, the founders are the ones leaning forward eagerly, and the investors are the ones sitting back with slightly pained expressions.\n\n**3\\. [The Lucky Office](http:\/\/maps.google.com\/maps?q=165+university+ave+palo+alto)**\n\nThe office at 165 University Ave was Google's first. Then it was Paypal's. (Now it's [Wepay](http:\/\/wepay.com)'s.) The interesting thing about it is the location. It's a smart move to put a startup in a place with restaurants and people walking around instead of in an office park, because then the people who work there want to stay there, instead of fleeing as soon as conventional working hours end. They go out for dinner together, talk about ideas, and then come back and implement them."} |