

It’s time to design a new instruction set architecture

Uncategorised Posted on Mon, February 08, 2021 17:49:20

I think it’s time to design a new hardware architecture that can eventually replace x86 as the dominant instruction set architecture (ISA) for high performance computing. In this post I want to outline my reasoning for why this should happen, and why it should happen now.

Historically, x86 has won out for three main reasons: Intel’s superior fabs, the scale of the x86 market, and Microsoft’s reluctance to support other instruction sets. When someone came up with something better (like Alpha), the market size of x86 and the huge investments made into it ensured that the advantage didn’t last long. Intel, even with a worse instruction set, could simply clock its CPUs so much faster that any instructions-per-cycle advantage became irrelevant. This is no longer true.

A lot of attention has been given to Apple’s M1 architecture. Apple has an advantage in using a newer ISA than x86. But the fact that a 30-year-old architecture (Arm) has advantages over a 40-year-old architecture should neither surprise nor impress anyone. (It would however surprise me greatly if Apple makes the investments needed to make their architecture competitive on the high end, given how small they are in that market.) Arm, while newer than x86, is essentially built under the same basic constraint: a limited number of transistors. And while RISC-V has gathered a lot of excitement because of its open nature, its design mirrors old architectures in that it aims to be simple rather than fast.

So why is it time to design a new ISA right now? I think it’s time to redesign something when the constraints of the original design are markedly different from the current constraints, and you can see that the new constraints will remain for the foreseeable future. Design decisions were made at the time because of the limitations of the time. Today we are in a very different situation from when x86, Arm, and PowerPC were conceived:

-Single-threaded performance has hit a ceiling. While computers as a whole are getting faster, gaining more cores and special hardware like GPUs, ML units, video en/decoders, and so on, the vast majority of software is single threaded and runs on the CPU. Many problems can’t be parallelized effectively. Even when software makes use of multiple cores or the GPU, a single thread acting as a job dispatcher can often be the bottleneck. This means that increasing single-core performance would have an outsized impact on how fast the computer is in practice. A computer with half as many cores but 50% more performance per core would be much more desirable in most cases, even though it has 25% lower theoretical performance.

-Most older designs were bound by transistor count, whereas today we have so many transistors available that spending more of them on a single core has diminishing returns. That’s why we go multi-core instead. If we designed an ISA today, we would do so with the assumption that we have a lot of transistors and are likely to get more.

-Frequencies are no longer going up, mostly due to heat dissipation issues, so a design with better instructions-per-cycle would have a more permanent advantage.

-Memory access (especially latency) has become a limiting factor of real-world performance. A design that has memory access designed from the ground up for a non-uniform memory access (NUMA) model, with caches, stacks in SRAM, more/different registers, memory synchronization, and prefetching at its core, would enable many new innovations.

-A good ISA used to be one that was good for humans to write assembly for, but almost no one does that today. A good ISA today is one that a compiler can generate good code for. What is clean and simple for a human to make use of is not the same as what’s good for a computer to make use of.

-A very large limiting factor is the CPU’s ability to reason about out-of-order execution. Currently the ISA provides very little semantic information to aid in this. A new ISA, together with language constructs along the lines of “restrict”, could help both compiler and CPU designers reach higher performance.

-So much of the software and infrastructure we use today is open source that a new ISA would very quickly gain a working software stack. One could imagine working GCC/LLVM backends and a Linux port fairly quickly. Microsoft has also shown a willingness to support ISAs other than x86, and their modern code base is designed for multiple ISAs.

-x86 carries a lot of old baggage that is currently needed for backwards compatibility. (MMX!) Removing it would save transistors and “dark silicon”.

-Modern CPUs have advanced branch prediction, pipelining, decoding, and a lot of other hardware designed to turn the existing ISA into something the CPU can use more effectively. The Itanium architecture tried to move a lot of this logic into the ISA. The problem with that is that the ISA then works for only one specific hardware implementation. What we need is the opposite: an ISA that unleashes the creativity of chip designers, and gives them the tools they need to innovate further.
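The “restrict” point above deserves a concrete illustration. C’s existing restrict qualifier is a small example of the kind of semantic information an ISA could carry: the programmer promises that two pointers don’t alias, which frees the compiler (and, in an ISA that carried the promise along, the hardware) from conservative reloads. A minimal sketch; the function and its names are mine:

```c
/* Without restrict, the compiler must assume dst and src may
   overlap, so it has to reload from src after every store.
   With restrict, the no-aliasing promise lets it keep values
   in registers and vectorize the loop. */
void scale(float *restrict dst, const float *restrict src,
           float k, int n)
{
	int i;

	for(i = 0; i < n; i++)
		dst[i] = src[i] * k;
}
```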

How would this happen?

I would prefer to see an organization set up and funded by the industry, mainly Intel, AMD and Microsoft. It would assemble a small group of independent engineers (preferably led by an industry heavyweight like Jim Keller) who would go off and design the new ISA. Then each IHV could go off and make their own hardware implementation and compete in the market for the best product. The ISA would be licensed for hardware implementation only to the participating companies for a few years, so that the investing companies could recoup their investment, and then be made freely available.

Eskil Steenberg

How one word broke C

Uncategorised Posted on Mon, March 16, 2020 04:39:59

A lot has been written about the dangers of “undefined behavior” in C. It’s an often-cited reason why C is a “dangerous” language that invites hard-to-find bugs and security issues. In my opinion undefined behavior is not inherently bad. C is meant to be implementable on lots of different platforms, and requiring all of them to behave in exactly the same way would be impractical: it would limit hardware development and make C less future proof. Some of the concern around undefined behavior in C comes from the fact that C is a small enough language that all corners of the language get explored and actually matter.
The problem with undefined behavior is the definition of undefined behavior, or more precisely a single word in that definition. Let’s have a look at the definition of undefined behavior in the C89 spec:

Undefined behavior — behavior, upon use of a nonportable or
   erroneous program construct, of erroneous data, or of
   indeterminately-valued objects, for which the Standard imposes no
   requirements.  Permissible undefined behavior ranges from ignoring the
   situation completely with unpredictable results, to behaving during
   translation or program execution in a documented manner characteristic
   of the environment (with or without the issuance of a diagnostic
   message), to terminating a translation or execution (with the issuance
   of a diagnostic message).

Sounds good. Now let’s have a look at the definition of undefined behavior in the C99 spec:

1   undefined behavior behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
2   NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

Notice any difference? Careful reading will reveal that the word “Permissible” has been changed to “Possible”. In my opinion this change has led C in a very problematic direction. Let’s unpack why it’s so problematic.

In C89, undefined behavior is interpreted as: “The C standard doesn’t have requirements for the behavior, so you must define what the behavior is in your implementation, and there are a few permissible options.” In C99, undefined behavior is interpreted as: “The C standard doesn’t have requirements for the behavior, so you can do whatever you want.” Everything after the word “Possible” becomes essentially meaningless. It’s the difference between telling your kids they have to go to school, and telling them that going to school is an option.

C89 gives the implementation plenty of reasonable options: doing nothing, doing something platform specific that is also documented, or failing. For a long time this was enough. Adoption of the C99 spec was slow (among other reasons because C99 added variable-length arrays, which turned out to be a very bad idea and were made an optional feature in later versions), so the change didn’t matter for a long time, since most compilers didn’t take advantage of the new definition. Over time, however, compiler engineers have employed more and more aggressive approaches to optimization, and the “do whatever you want” was too good an opportunity to pass up.

“Whatever you want” is a very big possibility space. If, for instance, you have a large codebase, like say the Linux kernel, and there is a single instance of undefined behavior somewhere in there, the compiler is free to produce a binary that does whatever it wants. It doesn’t have to document what it does, it doesn’t have to tell the user, it doesn’t need to do anything.

This change has led compiler writers to think that if the programmer even approaches anything undefined, they can do whatever they want, completely disregarding whether it makes logical sense, whether it is predictable behavior, or whether it is in any way useful to software development.
Let’s have a look at this code:

struct {
	char x;
	char y;
} a;

memset(&a, 0, sizeof(char) * 2);

The C specification says that there may be padding between members, and that reading or writing that memory is undefined behavior. So in theory this code could trigger undefined behavior if the platform has padding, and since the padding is unknown to the standard, this constitutes undefined behavior.

So we get into the weird situation where compilers say “I know X, but since the C standard doesn’t specify X, I can pretend that I don’t know X and behave as if it is unknowable.” It gives the compiler license to optimize the code, without telling the user, into this:
memset(&a, 0, sizeof(char));

If you are horrified by this, know that I’m being charitable towards the compiler designer here. I might as well have written that it’s perfectly reasonable for the compiler to produce a program that formats all your drives behind your back, because again, anything goes.

The problem here is that the compiler knows how much padding there is between the two members, since it is not only conforming to the C standard, it is also conforming to an ABI that very clearly needs to define padding between types. So the compiler states that it conforms to an ABI that clearly defines the padding between x and y, while at the same time claiming that the user has no way of knowing what the padding is.
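For what it’s worth, there is a way to zero the struct that is well defined no matter what the ABI decides about padding: ask for the size of the whole object instead of summing the members. A small sketch (the struct and function names are mine):

```c
#include <string.h>

struct pair {
	char x;
	char y;
};

void clear_pair(struct pair *p)
{
	/* sizeof covers the members plus any padding the ABI
	   inserts, so the whole object is zeroed with no
	   undefined behavior. */
	memset(p, 0, sizeof *p);
}
```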

A compiler is by its very nature a translator that translates from one language into another. It always has to conform to two standards: the one describing the input and the one describing the output. It makes sense for one of the two sides to say “translate me into whatever works best for the other side”. In this way the general concept of undefined behavior is valuable.

Let’s take a look at signed integer overflow:

Different architectures can handle signed overflow differently depending on how the sign bit is stored. If the behavior was defined by the C spec, it would be incredibly difficult and slow to implement on hardware platforms that didn’t handle overflow the same way as the spec. By keeping it undefined, the C standard gives hardware vendors more flexibility to innovate. So far so good.
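For contrast, unsigned overflow is not undefined at all: the standard defines unsigned arithmetic to wrap modulo 2^N on every conforming implementation, regardless of hardware. A small illustration (the function is mine):

```c
#include <limits.h>

/* Unsigned arithmetic is defined to reduce modulo 2^N, so
   UINT_MAX + 1 wraps to 0 on every conforming implementation.
   The same expression on a signed int would be undefined. */
unsigned int wraps_to_zero(void)
{
	unsigned int u = UINT_MAX;

	return u + 1u;
}
```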

Saying that something is undefined in the C spec is not the same as saying that it’s unknowable. If I use my compiler to compile a program on my machine, the compiler knows that I’m compiling it to the x64 instruction set, so while the C standard doesn’t define what happens when a signed integer overflows, the x64 specification certainly does. Consider this code:

void func(unsigned short a, unsigned short b)
{
	unsigned int x;

	x = a * b;

	if(x > 0x80000000)
		printf("%u is more than %u\n", x, 0x80000000);
	else
		printf("%u is less than or equal to %u\n", x, 0x80000000);
}

If we run this code (on a platform where short is 16 bits and int is 32 bits):

func(65535, 65535);

We get:

4294836225 is less than or equal to 2147483648

This looks crazy! Why does this happen? You would think that since there are no signed variables in the code, overflow would be defined and you wouldn’t have problems, but no. What happens is that C allows the promotion of types to other types that can fit the entire range of the original type.


x = a * b;

is effectively

x = (unsigned int)((int)a * (int)b);

Since the product of a and b is a signed int, the compiler deduces that the result can’t be more than INT_MAX, and this carries over after the cast to an unsigned int, because wrapping would require undefined signed overflow, which the compiler assumes never happens. Therefore x can never be more than 0x7FFFFFFF, and therefore the if statement can be optimized away at compile time. You can imagine that the vast majority of programmers would have trouble debugging this code and understanding why it behaves like it does.
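The trap can be sidestepped by doing the promotion yourself: cast the operands to unsigned int before multiplying, so the arithmetic stays unsigned and no undefined signed overflow is involved. A sketch of one possible fix, not the only one:

```c
#include <stdio.h>

void func(unsigned short a, unsigned short b)
{
	unsigned int x;

	/* Casting the operands keeps the multiply unsigned, so
	   wrapping is well defined and the compiler can no longer
	   assume the result fits in a signed int. */
	x = (unsigned int)a * (unsigned int)b;

	if(x > 0x80000000)
		printf("%u is more than %u\n", x, 0x80000000);
	else
		printf("%u is less than or equal to %u\n", x, 0x80000000);
}
```

With this version, func(65535, 65535) takes the first branch and prints the expected value 4294836225.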

C compilers have taken the concept of undefined behavior even further by doing the mental acrobatics of thinking “if undefined behavior happens, I can do what I want, so therefore I can assume that it will never happen”. Consider this code:

*p = 0;
if(p == NULL)
	return;

If the compiler thinks that writing to NULL is undefined, it can assume that since you are writing to p, p can’t be NULL. And if p can’t be NULL, the entire if statement can be removed, so after optimization the code looks like this:

*p = 0;

This is very obviously dangerous behavior. In fact, Linus Torvalds has said that this behavior is so broken that the Linux kernel has broken with the C99 standard, and now requires that the kernel is built with the -fno-delete-null-pointer-checks option. The fact that the compiler can detect that the value p may be written to even when it is NULL is great. But it should result in a warning, not be seen as a license to make stupid assumptions.

The point of a compiler is not to show off that whoever implemented it knows more loopholes in the C standard than the user, but to help the programmer write a program that does what the programmer wants. If you are a compiler and think that the if statement above is superfluous, or that the code allows you to write to a null pointer, THEN TELL THE PROGRAMMER! That’s information the programmer wants to have!

It’s like if a company builds a dangerous product that cuts people’s fingers off, but instead of fixing it, they put a warning on page 57 of the manual. Yes, you might be following the letter of the law, but your product still sucks for people fond of their fingers.

The thing is that while it is desirable to write code that is portable and has the same behavior on any platform that C can be implemented on, it is also very useful to write C code that takes advantage of a specific platform. Portability is not the only goal a programmer can have. Making assumptions about your hardware is increasingly useful. The reality is that we know a lot more about hardware architecture now than we did when C was invented. If you write code that assumes that int is 32 bits, that struct members are padded to the even size of the members, and that int overflows to INT_MIN, you are going to be hard pressed to find a platform in wide use where this isn’t true. I’m even willing to bet that it’s going to look the same for decades to come. (Padding may change, given that memory access is the main bottleneck, so packing things closer together to avoid cache misses may be a win over the cost of unpacking misaligned data.) Can I see us using 128-bit pointers in the future? Yes, but even if Moore’s law keeps going, that’s close to a century out. Worrying that your code won’t do the right thing on a platform where a byte has nine bits is insanity, even if the C standard permits such a platform to implement C.
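If you do bake such assumptions into your code, C11’s _Static_assert lets you state them outright, so a port to some exotic platform fails loudly at compile time instead of misbehaving silently. A sketch (the messages are mine):

```c
#include <limits.h>

/* Make the platform assumptions explicit: compilation stops
   with the given message if any of them is false. */
_Static_assert(CHAR_BIT == 8, "a byte is assumed to be 8 bits");
_Static_assert(sizeof(int) == 4, "int is assumed to be 32 bits");
_Static_assert(sizeof(void *) <= 8, "pointers are assumed to fit in 64 bits");
```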

Besides, the vast majority of C programs aren’t portable because of dependencies, not because of assumptions about the underlying architecture.

My feeling is that if this continues, we will eventually end up with a forked version of C that caters more to engineers who want predictable results in practical applications than to compiler engineers and academics who want to imagine theoretical architectures. In some ways this has already happened with the Linux kernel. Until this happens I’ll probably stick with C89.

The Startup party is over

EverythingElse Posted on Sun, June 02, 2019 18:34:29

If I told you I was going to start a company with a friend, build the product in a few weekends, put it out, get hockey-stick exponential growth, and within two years be valued at more than one billion dollars, would that be realistic? At any time in history you would have said I was insane, with one exception.

The period of, say, 2002 to 2014 was a unique time, brought on by broadband, a maturing web, free software, low-cost commodity hardware, and the advent of the smartphone. During this short time it was possible to create a billion-dollar company out of your bedroom, and it enabled a herd of unicorns that in many ways transformed the world. Spotify, AirBnB, WhatsApp, Twitch, Uber, Lyft, Twitter, Facebook, Github, Instagram, Snap… We all know the names of the companies that were born during this period.

The problem is that people don’t get that the party is over. All the obvious apps/websites have already been built. The large companies are too established to be threatened by a weekend project. All the low-hanging fruit has been picked. YC keeps having bigger and bigger classes, but the successful companies we associate with YC are a decade old, because it’s almost impossible to build anything meaningful in 10 weeks anymore. Today I would argue that you should expect it to take five years to build anything that matters.

Yet our culture still thinks it’s ten years ago. Everyone expects a release in 6 months, and 20% month-over-month growth at scale. We take as gospel lessons from successful people in podcasts, books, lectures, and films who talk about how to start a business in a world that no longer exists. We have become a generation of founders and VCs who have a fundamentally warped idea of what it takes to do something significant.

What’s worse is that people think this “Silicon Valley approach” should take over the world. All the aforementioned unicorns solve a fairly narrow band of problems around connecting people and services and extracting arbitrage from existing things. They are all based on software. You can’t apply the same timelines and growth expectations to things like transportation, energy, aviation, housing, or blood tests. Again and again, I hear Silicon Valley companies big and small announce they will disrupt the physical world, only to quietly give up a few years later when it turns out that reality is hard and that the incumbents aren’t as stupid as they thought.

Silicon Valley likes to tout companies like Tesla and SpaceX as examples of how its can-do attitude can change the world. But neither of them is a good example of what founders and VCs are looking for in terms of growth, return on investment, and most of all timeline. They both exist because Elon put his own money into them and persisted for 15 years, something almost no one seems willing to do.

I want to change the world, but real problems are hard. There are huge opportunities in transportation, energy, computing, healthcare, and manufacturing, but none of them can be solved on the kind of timelines and expectations that the startup community has. So many companies I see have great ideas, but aren’t realistic and serious enough to realize them. I worry that we won’t solve these solvable problems, because everyone just wants to party like it’s 2009.

Supply side service

EverythingElse Posted on Mon, June 18, 2018 17:10:54

If you were starting an ice cream shop, what would you like it to be like? I’m sure you would want lots of flavors, really nice seating to enjoy the ice cream in, maybe organic locally sourced ingredients, and what about free samples? You probably want this because you, like me, are an occasional customer of ice cream shops. Once you become the proprietor of an ice cream shop, however, it’s easy to see how it’s cheaper to have fewer flavors, low-quality ingredients, and no free samples, and why should you pay rent for space for people to just sit around? If you run an ice cream shop for long enough you will entirely forget what it is like to be a customer, and your entire world view will be based on how hard it is to run an ice cream shop. This is when your ice cream shop starts to suck. This is why most big brands and chains suck.

I call this going from “demand side service” to “supply side service”, and it’s everywhere. Products and services go from being great to being easy to make and deliver. The free market and competition are supposed to deliver ever better products, but instead they deliver greater profits at the expense of quality.

I see this everywhere. The post office complaining that it’s too hard to deliver mail, customer service where you get automated messages telling you to look it up online, staff no one took the time to train, fast food meat that isn’t really meat. I work in tech, so that’s where I notice supply side service the most. Every product is made to monetize my data, advertise to me, or get me to subscribe. Buy a smart TV and you have to spend an hour setting it up, and none of the steps make the experience better for you; it’s all about making it better for them. Don’t get me started on the supply side service of removing ports… Every interaction with a tech product has become a game of “spot the devious monetization they are trying to hook you into”.

Go look up “art nouveau architecture”. We will never make buildings like that again. Think of that. With all our tools and technology, advances in design, and economic growth, we will never do something as good as we did 100 years ago. Don’t tell me no one can afford buildings like that; people pay $10 million for an apartment and still get supply side service. The reason we don’t make great things isn’t that there aren’t any rich people. Making crap is how we make rich people. No wonder the owners of Walmart are among the richest people in the world. During last year’s holiday shopping season I happened to be looking in the art book for Dishonored 2, and I realized that nothing I could find in the fanciest department store held a candle to the props in that book. I usually don’t want a lot of stuff. I used to think it’s because I’m not materialistic; now I’m wondering if it’s because no one makes anything good enough to be wanted.

I am a demand side supplier: I want the things I make to be great. The good thing is that when companies start delivering supply side service, they start taking their customers for granted and can be outcompeted. My heroes, like John Carmack, Linus Torvalds, Elon Musk, Steve Jobs, and Kelly Johnson, pushed the envelope because their goal has been to make something great, not to make something cheaply and just good enough to sell. They will never be understood by business people, because they don’t care that much how hard it is to make; their entire philosophy is: if we make it good enough, people will want it enough that the economics will work out. They are not looking for the path of least resistance, they are looking for the greatest result. It takes a lot of focus to always imagine yourself as the customer; most aren’t even trying.

A Culture of Conspiracy

EverythingElse Posted on Fri, June 08, 2018 20:03:35

I like to think of myself as having a fairly evidence-based world view. I aspire to judge the world around me like a scientist. As such I require evidence, references, and peer review. I tend to be very skeptical about conspiracy theories. About a year ago I read Christopher Steele’s now infamous Russia dossier, claiming Russia’s FSB helped get Donald Trump elected. What was just as shocking as its content was that I instantly bought it.

The fact that I have believed this has bothered me a fair bit, so I have been trying to figure out why I believe it. In this post I will try to explain why, and how my world view shapes how I gauge the plausibility of a story. Let’s be very clear here: very little proof has been produced to substantiate the claims. If proof emerges that disproves the content of the dossier, I won’t dispute it. Everything, as far as I know, could be made up.

So why would I believe it? Clearly, given that I don’t share Mr. Trump’s politics, I could be biased simply because I want it to be true, but I don’t think that is it. I don’t believe in common conspiracy theories about other politicians I disagree with. I could say that it “smells” true, but that might be the least scientific way to judge something. “Feeling” or “wanting” the earth to be flat doesn’t make it so.

Let’s start with the obvious: conspiracies do exist. Watergate, Iran–Contra, Enron, the plot to kill Hitler in 1944, and Bernie Madoff are all historical facts. They are however easily outnumbered by the conspiracy theories that have no factual basis, so it’s prudent to be very skeptical. It means that we need to think hard about what to believe, especially since conspiracy theories flourish so easily online.

An important marker that tells me this is true is that it never goes overboard. Trump is offered lucrative contracts, but he turns them down. The Russians have “kompromat” on Trump, but they never use it. If you make up a conspiracy theory you don’t cut out the juiciest bits. Almost all conspiracy theories are out to discredit someone, and this one is way too off the mark to be useful if it was made up.

My main hint that this is real is the smell of office politics. This is the reason I wanted to write this, and it’s also something I think is very important in order to understand the world in a broader sense.

If you have ever worked with other people on a project, you know that decisions and actions are very rarely as well coordinated as they should be. Different people have different ideas and pull in different directions. Ideas are approved or shut down because of whom they emanate from, what group they belong to, their status, or what group will benefit or get credit. Conspiracy theorists often ascribe superhuman coordination to the group of people who are executing the conspiracy. When has a group of people ever been perfectly in line and synchronized? It doesn’t happen. If a conspiracy hinges on people working perfectly together, then it’s probably not true.

When reading dating profiles, CVs, and other self-descriptions, I have learned not to read what is written, but to read the person who chose what to write. You can lie and say you are 6 feet tall when you are really just 4 foot 8, but you can’t escape the fact that you thought stating your height was a good idea. That is inescapable, and it tells me something deeper about you.

The Steele report sounds like the kind of venting you would hear in a bar from a friend talking about how messed up things are at work. I think a fair bit of the content in the dossier is not very accurate. You could disprove a specific thing in it, and it may still give the overall conspiracy weight. If your friend at the bar tells you management did something stupid today, are they lying? Probably not, but they are probably also not privy to all information, and they are only telling you their side of the story. If the boss was there telling you their reasoning for the decision, it might not sound as bad. This is hearsay, not facts. Your friend in the bar may not have all the facts, and may have some things wrong, but they are still probably capturing the culture and issues at their job pretty accurately.

Hunter S. Thompson’s reporting was once called “the least factual, but most accurate account”. I can tell you a story about someone that is not true but that still accurately reflects that person’s personality and motivations. These are assertions about culture.

We often attribute too much intelligence to conspirators. Somehow I’m supposed to be convinced that George W. Bush was the mastermind behind 9/11, but he couldn’t pronounce “nuclear”? Conspiracies, or any kind of illegal behavior, emerge from an environment where they are accepted. This is where culture comes in. If you have spent a decade thinking about how to invade Iraq, of course you are going to try to use 9/11 to that end. Bernie Madoff didn’t start out as a fraud, but once you are in a culture of success it’s easy to start hiding losses to retain that culture. As the losses grow you go further and further, and soon you are doing things you once couldn’t imagine yourself doing. People don’t ask questions because they want to believe what they are told. People don’t lie to create conflict, they lie to avoid conflict.

It’s what Nick Davies describes as “the conspiracy of power recognizing power” in his excellent book “Hack Attack”. We envision long tables where powerful men meet in secret to decide the fate of the world, but in reality there is no need to meet. Most powerful people know without asking what actions will be supported or opposed by other powerful people. If you plan to propose a tax cut for the rich, you don’t have to ask rich people if they will support you in the next election. If you plan to invade one of the most oil-rich countries in the world, you don’t need to ask oil companies if they are onboard. Facebook told their employees to pay anyone who could make content to generate engagement, and before they knew it an army of people were trying to write the most shocking headline about Hillary they could, because that’s where a culture of anything-goes-as-long-as-it-generates-clicks eventually takes you.

I have a saying I keep repeating about foreign policy, and it goes: “All foreign policy is really domestic policy”. If you want to understand a country’s foreign policy, you must understand that it doesn’t have a foreign policy; it has an amalgamation of policies driven by different people, sometimes pulling in the same direction and sometimes not. Each individual person has their own objective, like appealing to a specific electorate, sucking up to the boss, outmaneuvering the boss, helping friends, keeping enemies down, or trading favors. Some are ideological, some are not. Culture is important because it is the thing that can make the majority pull in the same direction. The same goes for understanding companies, parties, or any other organization.

The content of the Steele report is the product of a Russian culture of an overzealous security service. Putin is an old KGB man, so he has created a culture where everything can be solved the KGB way. They didn’t plan any of this; they videotape lots of people who stay at their fancy hotels, and why not? Tape is cheap. It just turned out that one of them decided to run for president of the US. Russian hackers probably try to steal e-mails from everyone, and once the DNC emails landed on their desks, why not leak them to Wikileaks? Russia has attempted to discredit democracies for years; it was just their luck that they found a candidate that slotted neatly into this narrative. On the other side we have a campaign with a culture of anything goes as long as it pleases Trump. This is not a story of a grand plot; it’s a story of people who were so busy winning one race, they forgot that there were other things they could lose in the process.

And this is the point. When the
campaign says there was no conspiracy, I think many of them believe what
they say. I don't think they recognize that what they did was a
conspiracy. There was no secret meeting between Putin and Trump in a
hilltop castle where they signed a fellowship in blood and used
table-sized maps to carve up the world between them. When Trump Jr.
shares his email conversations, he thinks they prove that all they did
was meet some Russians to get dirt on Hillary, not conspire with a
foreign power. What he is not recognizing is that this is what a real
conspiracy looks like.

Are we ready for AI we don't understand?

EverythingElse Posted on Thu, May 24, 2018 14:05:43

Some day in the future a little girl in a red dress runs out into the road and gets hit by a self-driving car. A few weeks later a young boy in a red jacket on the other side of the planet gets hit by another self-driving car from the same maker. A few months go by and the pattern is clear: the AI for some reason doesn't understand that kids dressed in red are something not to drive into.

If this was a faulty brake pedal, airbag or ignition switch, the problem could be found, fixed, and cars could be recalled so that the issue could be addressed. As costly as this might be, the punitive damages a car maker could face if they were to knowingly ignore a faulty car that would hurt or kill people would be far greater.

However, with neural networks and machine learning, the AI driving the car was in large part not designed by an engineer; it was trained using millions and millions of miles of traffic data recorded by cars with cameras and other sensors. The neural network looks at this data and tries to find patterns in traffic and the responses expected of the driver.

The problem here is that if something goes wrong and we have accidentally taught the machine that it's OK to hit kids if they wear red clothes, it's very hard to figure out what in the millions of miles of data made it think that was OK. There is no line in the code that can easily be fixed that says:

if(kid && color != red)

This causes a huge liability problem. If you go in front of a judge and say that there is no real way to know why the AI drove into the child, and that it's not something that can easily be fixed, then no matter how good the overall safety record is, the judge will order all cars off the road until the company can guarantee that it won't happen again. With machine learning you can't really make that guarantee. Saying "If we keep training, it will probably get better at not hitting kids" won't really cut it in a legal or PR context.

We are going from a paradigm where we understand the code, but the code doesn’t understand the world, to paradigm where the code understands the world but we don’t understand the code.
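That shift can be illustrated with a toy sketch. This is entirely hypothetical and has nothing to do with any real driving system: an explicit policy is one readable, fixable line, while a learned policy is just numbers fitted to whatever the data contained. Here a tiny logistic regression, trained in plain NumPy on made-up data that never brakes for kids in red, faithfully reproduces the flaw in its training data, and the "rule" exists nowhere except smeared across the fitted weights.

```python
import numpy as np

# Explicit-code world: one line you can read, blame, and fix.
def explicit_policy(is_kid, wears_red):
    return 1 if is_kid else 0  # brake for every kid, whatever the color

# Learned world: hypothetical features [is_kid, wears_red] -> 1 means "brake".
# The (made-up) training data is flawed: kids in red were never braked for.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])

# Plain gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted brake probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def learned_policy(is_kid, wears_red):
    z = w[0] * is_kid + w[1] * wears_red + b
    return 1 if z > 0 else 0

# The model has learned NOT to brake for kids in red, but there is no
# line anywhere saying so -- only the numbers in w and b.
print(learned_policy(1, 0), learned_policy(1, 1))
```

The explicit policy brakes for the kid in red; the learned one does not, and the only place you could "fix" that is the training data itself.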

Our legal system is based on the idea that we are each responsible for what we do and that we know what we are doing. It's almost impossible to guarantee anything that comes out of a machine learning algorithm, no matter how high its success rate is. In our society we demand that when things go wrong we can find the issue and have it fixed, so that it doesn't go wrong again. We allow for mistakes, but there is a reason why we don't allow for repeated mistakes.

If I was in the legal department of any company basing its tech on machine learning, I would be very worried about this. What kind of promises can we make, and how responsive can we be, when something is wrong with a product no one really understands in depth? What happens when your translation system is sexist, or your camera system can't see black people?

A great feature of technology is if we can understand it. If we understand its capabilities and limitations we can trust it to do some things, and also know what it can't be trusted with. A steering wheel is understood: we know when to blame its maker and we know when to blame its user.

How to defeat Internet giants

EverythingElse Posted on Mon, April 23, 2018 20:25:34

I have been thinking for years about my
project Unravel, and how to redesign some fundamentals of the Internet.
To my delight, I now see a lot of other projects that in some way or
another are trying to take on how the Internet works, and the large
companies that have come to control it. There seems to be broad
agreement that something should be done, but there is not so much
agreement on what should be done. In my mind that is a good thing. The
more things that are attempted, the better the chances are that someone
succeeds. Taking on some of the richest companies in the world won't,
however, be easy, and will take a lot of work, so I thought I would
share some advice to anyone else out there trying to change the way we
use the Internet.
Don’t depend on economy of scale.

I hear so many
pitches that include: "Once we get to scale we can…". You don't have
scale. Facebook has scale, Google has scale, Amazon has scale. If scale
will make your product great, then you have already lost, because your
competitors have it and you don't. Your platform needs to be useful even
if only one person, or maybe two, uses it. Instagram didn't start out as
a social network; it was a filter app. It was useful even if you were
the only one using it. Unravel is designed so that I will use it every
day, even if no one else does.

Nobody cares if you are nice.

I see a lot of people attempting to create the "nice and friendly"
version of existing services. Nobody will ever trust you to be nice.
Google used to say "don't be evil", but they don't anymore. Not because
they decided to be evil, but because there was no way of defining what
is evil. Everyone thinks they are making the world a better place, so
that doesn't make you special. If you run a service of some scale, you
are going to piss people off. There is content out there that some will
ask you to censor, and some people will be outraged if you do censor.
You can't win that one. All the big internet companies are trying to be
nice, but they are failing because of the structures they have built. If
your entire image is that you are always nice, you are just going to
make it worse for yourself when, inevitably, people start to question
your actions. If possible, have nothing to do with what people do with
your platform; delegate to your users.

Monetization won't save the Internet.

isn’t in a precarious state on the Internet, because there is no money
to be made from content on the Internet. There is plenty of money,
journalist just cant compete against click bait, cat videos, incendiary
opinions and fake news. The Internet has become what it is because the
incentives have asked people do do these things in order to make money.
If your platforms pitch is that it will enable people to make money (or
worse tokens,) it will attract the same people who ruined other
platforms, and they will work just as hard to game yours. My advice
would be to keep money out of it.

Use psychology, not rules.

Go to Wikipedia, look up a controversial political figure, and then go
to the discussion page. Then go to Twitter, and search for the same
political figure. The former will be a mostly sober discussion about
wording, attribution and fact checking. The latter is likely to be a
cesspool of insults and name calling. Any user who can sign up to
Twitter can sign up to be a Wikipedia contributor. So why are they so
different? Could it be that on Twitter the wildest punch line gets
retweets and likes, whereas anything on Wikipedia that isn't balanced
and referenced gets quietly deleted and rewritten? If you build a
platform, you create the incentives, and the right incentives will beat
any ban hammer.

Find your competitors profit center, then build a future without it.

Whatever you build, your competitor can build too. They most likely have
more resources than you do. If you get traction they will copy you,
unless, that is, you make something they would never do. Almost all
massive corporations that have fallen have fallen because they refused
to embrace the technology that threatened their profit center. The music
industry didn't embrace the Internet, to protect the CD. Apple didn't
take on Microsoft in the 80s, to protect their hardware sales. Xerox
didn't want to get into computers, to protect their photocopying
business. SGI didn't want to compete against nVidia, because they made
too much money from their expensive workstations. If you want to slay a
dragon, figure out what they would never do, then do that.

Or as
Keyser Söze put it: "To be in power, you didn't need guns or money or
even numbers. You just needed the will to do what the other guy
wouldn't."
10 years of LOVE

Love Posted on Thu, December 01, 2016 08:09:07

Today is the 10th anniversary of the development of my game LOVE, and I think it’s time to tell the story behind it.

I was working in academia, and as much as I love science, I was getting
tired of not doing something real. When you do research about something
like video games or video game production, you never really know if the
solutions you create would work in the real world. I was considering
doing something completely different, but then I realized that it would
be a waste not to use my skills, and in the end I really love making
games. One late night, after coming home from a conference, I started a
new Visual Studio project called project love. I worked on it all night.
The name stuck and so did the game.

I was in way over my head,
but I liked it. I decided to do everything myself: engine, networking,
graphics, sound, physics, gameplay and procedural generation. It may be
the most ambitious game project anyone has ever attempted, but none of
that was really a problem. 3 years later I released an alpha.

I was very excited, but there were some problems. I fixed them, and then
there were more problems. I kept fixing problems, but the game just
didn't work. No players came, and the server costs started to outstrip
the income. The press loved my game, until they played it. It wasn't
without merit; it just didn't come together. It turned out that I had
vastly underestimated the design challenges in the creation of the kind
of game I wanted to make. I was essentially trying to invent an entirely
new class of games.

At the same time someone else, with my resources,
in my city, made a very similar game: Minecraft. The difference was
that his was a game people wanted to play. When you work on a big game
there are many people you can blame if things go wrong. I had no one.
The fact that someone else did it proved that it wasn’t an impossible
task. I was just not good enough.

I thought I wanted to make a
commercial game, but at every turn where I had the opportunity to make
it commercial or design it the way I wanted, I chose the latter. Many
people have told me I needed to market the game better or make it easier
to learn, but to me this was always secondary. To me, the game simply
wasn’t good, and until that was fixed, why bother trying to attract
players? I spent almost 4 years trying to fix the game, and while
improvements were made, it never worked.

All of this was really
hard on me, and I got fairly depressed. After 7 years, I finally gave
up. Love was just associated with too much pain. I had wasted 7 years
and so much money. I didn’t want to be a game developer any more. When I
told people what I did, people would inevitably say “Oh, like
Minecraft? I love that game”.

At my lowest point I was at GDCE
and Robin Hunicke (who BTW is awesome) gave a talk about the hugely
successful game Journey that had just come out. She told the story of
the horrible development of that game, about the infighting and the pain
that it caused. I thought to myself: would I rather have had that
experience, having a terrible time making something successful, or do
what I did: have fun making something no one else cared about? That's
when I realized that I had done the right thing. I followed my dream and
I enjoyed the process more than the result. Minecraft fucked me up,
but not as much as it did the guy who made it. I got past it, and I came
out a better person. He is no longer my nemesis; I feel for him.

For the last few years I kept a note file with ideas of how I would
change Love, but I was scared to go back. I worked on the pivot model to
be able to finally understand how games work. Last year, I decided to
take a few weeks off to fiddle with Love, just to see if I could apply
any of my ideas and how it would feel. I was kind of surprised by how
good it felt. And I was even more surprised by the changes I made. For
very brief moments, Love started to sing.

I don’t know what it means
yet, and I don’t dare think I have cracked it, but for the first time in
many years I’m excited about it. So yes, I guess this is my
announcement that I’m occasionally working on Love again (for followers
of my Twitch stream it hasn’t really been a secret). I was planning to
make a video showing off what I’m working on, but I don’t feel ready, so
I wont. Maybe I will some day. I don’t have a timeline or a release in
mind. This time I know I’m doing it for me.

My next project is
Unravel, and I can’t even imagine it being successful, but I know that
it will challenge and intrigue me for years to come. In the end I am a
scientist and an artist. I tried not to be, but I am. I will always
rather boldly go where no one has gone before than be one of the
popular kids. I'm not convinced I will ever make something that anyone
will ever like and use; I will probably never be rich or famous.
But you know what? I’m going to live a really good life.
