Things politicians should probably know about computers

In our modern world computers are everywhere. They’re in our phones, airplanes, bureaucracies, communication systems, and even in our microwave ovens; yet politicians seem to have very little understanding of how these machines work. Hopefully this (admittedly lengthy, though I hope interesting) blog post will be able to help.

But first, what is a computer?

Let me tell you a story.
Once, a very long time ago, there was a war. On the one side there was a dictator who wanted nothing short of the complete annihilation of his enemies. He was a man whose very name would someday become synonymous with genocide.

In his quest for world domination, the engineers who worked for him invented many new weapons. They had invented explosives that could be propelled to their destination on their own. These “rockets” as they were called were not particularly good at hitting their targets though. The rockets couldn’t think. They couldn’t understand where they were going. They were, after all, just machines.

On the other side of the war there were the allied forces. Those who knew exactly what this dictator wanted, and knew that he had to be stopped.

The communication lines used by the dictator relied on a special machine called “the Enigma”. It was a powerful cryptography device. The code had to be cracked before the end of each day, because the encryption key was changed every day; anything cracked after that was useless.

The world’s best minds simply couldn’t keep up with the Enigma machine.
Machines can often do tasks faster than humans can, but this particular task required a lot of thought, which naturally raises the question: can a machine be made to think?
A mathematician named Alan Turing believed that, yes, machines can indeed think. He designed a machine that could do exactly that. He had invented the Turing machine.

To create a Turing machine you take a tape that is infinitely long and divide it into segments called “cells”. Each of these cells contains information. That information could be a number, a letter, the location of another cell, or, importantly, an instruction. Next you would fill many of these cells with a series of instructions. These instructions are read by a read/write head, which follows them and modifies the information on the tape based on what they tell it to do.

The modern computer, to put it plainly, is a machine that follows instructions. It’s a Turing machine. The only difference between the Turing machine and the modern computer is that modern computers have a finite amount of memory circuits instead of an infinite tape, those circuits are divided up into bytes which are made up of ones and zeros, and the read/write head is replaced with a processor circuit made of logic gates.

The modern computer only understands certain commands. These commands include things like moving information around memory, doing basic math operations like addition, subtraction, multiplication, and division, comparing two pieces of information in memory, and jumping from one instruction to another based on whether or not those two pieces of information are the same (that can mean deciding whether or not to skip over many instructions, or jumping back to an earlier instruction).
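
To make that concrete, here’s a toy sketch (written in Python rather than a real machine language) of a machine blindly following a short list of instructions of exactly those kinds: put a value in a memory cell, add to it, compare two cells, and jump.

    # A toy "machine that follows instructions" (a sketch, not a real instruction set).
    # Memory is a list of cells; each instruction names an operation and its operands.

    memory = [0] * 8                      # eight memory cells, all starting at zero

    program = [
        ("set", 0, 5),                    # put the number 5 into cell 0
        ("set", 1, 0),                    # put the number 0 into cell 1
        ("add", 1, 1),                    # add 1 to cell 1
        ("jump_if_not_equal", 1, 0, 2),   # if cell 1 != cell 0, jump back to instruction 2
        ("print", 1),                     # otherwise print cell 1 (it now equals cell 0)
    ]

    pc = 0                                # the "read/write head": which instruction we're on
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":
            memory[op[1]] = op[2]
        elif op[0] == "add":
            memory[op[1]] += op[2]
        elif op[0] == "jump_if_not_equal":
            if memory[op[1]] != memory[op[2]]:
                pc = op[3]
                continue                  # skip the usual "move to the next instruction"
        elif op[0] == "print":
            print(memory[op[1]])          # prints 5
        pc += 1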

These Turing machines, as it turns out, can actually do any kind of thinking that’s possible. They can, in theory, be programmed to think exactly like a human (although we don’t yet know how to do that), or they can be programmed
to think in some other way. The only kind of thinking they can’t do is the kind of thinking that would solve a paradox since paradoxes, by definition, are unsolvable.

Most people don’t know how to troubleshoot computers because computers don’t SEEM like machines that blindly follow commands.
The commands that modern computers follow go all the way down to the level of controlling the electrical signals that go to the screen of the device you’re reading this with to tell it how to display this message.

So now let’s dive into computers and see what needs to be known about them to deal with modern computer-related issues.

1. Operating Systems.

In the early days of home computing the programs that computers ran would typically assume full control over the device. If you hooked up a printer to the computer, then whatever program was currently running would talk directly to the printer. Since we live in a capitalist society, there’s more than one printer company out there.

This meant that different printers would communicate with the program in different ways, so a given computer program might only be compatible with certain printers. This, of course, is a terrible way for things to work.

Thus one day kernels were invented. A kernel is a special computer program that talks to any hardware hooked up to the computer on behalf of the computer programs running on the machine. This way if a new printer comes out the kernel can be updated to work with that new printer, and now all the programs you can run on your computer will be compatible with that new printer. The kernel also enables the computer to run multiple programs at the same time.

The kernel, and all the programs that come with it, come together to make what’s known as an “operating system”. Modern personal computers need operating systems in order to work. Jake Roper from Vsauce3 made a great video on the subject here.

The problem:

Unfortunately most computers come with the Microsoft Windows operating system installed by default. It can be hard to find a computer that doesn’t have Windows already on it. At the time of writing this isn’t too huge of a deal, since Microsoft is currently giving away copies of Windows for free, but, as I’ve talked about in a previous blog post, software that’s given away for free can be, and often is, a scam.

Microsoft’s entire business strategy is vendor lock-in. That’s why they have so much money, and why they had so much money in the past. There are alternatives to Windows such as Linux, but most people don’t know how to replace their operating system. This problem has gotten better in recent years since Google started selling their own laptops (Chromebooks) with their own operating system (Chrome OS), but still it would be better if it were easier for all of us to move away from using Windows.

2. The Internet.

In order to understand the problems the Internet faces we need to know how it works, and what it is. The Internet is a giant computer network that connects almost every computer on Earth (and possibly some not on Earth) together.

Like any computer network, its goal is to allow a computer program on one computer to talk to another computer program on another computer. Computer programs can talk to each other and, in fact, many of them spend most of their time doing exactly that. There are many languages that computer programs use to communicate with each other. A language that one program uses to talk to another program is called a protocol.

One such protocol is called the Internet Protocol, or IP for short. When one program wants to connect to another computer over the Internet it needs to know what program it wants to connect to, and what computer it wants to connect to.
Every computer on the Internet has an associated IP address, which is what computer programs use to specify which computer they want to connect to. An IP address is just a series of four numbers, each between 0 and 255.
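
As a rough sketch of what that means, the four numbers are really just one big number written in a friendlier way, which Python’s standard ipaddress module can show (8.8.8.8 is a well-known public address I’m using purely as an example):

    import ipaddress

    addr = ipaddress.IPv4Address("8.8.8.8")   # four numbers, each between 0 and 255
    print(int(addr))                          # the same address as one number: 134744072
    print(2 ** 32)                            # how many such addresses can exist: 4294967296

    # The replacement, IPv6, uses much bigger numbers:
    print(int(ipaddress.IPv6Address("2001:db8::1")))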

The problem:

The Internet is running out of IP addresses. There can only be about 4.3 billion of them, and the last big blocks were handed out years ago. Thankfully there’s a replacement called IPv6 that will give us more addresses than we could ever need, but to implement it we’d need to change a lot of infrastructure, and nobody wants to do that. There is a short-term workaround that the Internet has been using called NAT (network address translation), but that can’t hold forever.

3. The client/server model.

When a program wants to connect to another program, one of them needs to be the client and one of them needs to be the server. On the Internet the way that works is that one program tells its kernel that it wants to be a server program, and then the kernel allows it to accept connections from clients over the Internet.

Then the client specifies to its kernel what IP address it wants to connect to and what program it wants to connect to, and the kernel sends connection requests (spoken in the IP protocol) over the Internet. The server program on the other end decides whether to accept the connection or reject it. If it accepts, the two programs can send messages to each other. If it rejects, the client gets an error message and has to decide what to do.
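
Here’s a minimal sketch of that dance using Python’s standard socket module, with both ends running on the same machine and port 12345 picked arbitrarily for the example:

    import socket
    import threading
    import time

    def run_server():
        # The server program asks the kernel to accept connections on port 12345.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", 12345))    # 127.0.0.1 always means "this same computer"
            srv.listen()
            conn, _addr = srv.accept()        # wait here until a client shows up
            with conn:
                print("server received:", conn.recv(1024).decode())
                conn.sendall(b"hello, client")

    threading.Thread(target=run_server, daemon=True).start()
    time.sleep(0.5)                           # crude: give the server a moment to start

    # The client asks its kernel to connect to that IP address and that program.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 12345))
        cli.sendall(b"hello, server")
        print("client received:", cli.recv(1024).decode())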

The problem:

There isn’t really a problem with this, except for the fact that your internet service provider can see which computers your machine is connecting to simply by looking at the IP address in each connection that passes through it. One thing that companies can do is artificially slow down the connection to certain sites and offer to speed it back up to normal in exchange for more money. This is where net neutrality comes in.

The idea of net neutrality is that internet service providers may not artificially slow down, block, or prioritize particular sites or services.

4. Domain Names.

In the very earliest days of the Internet, if you wanted to connect to another computer you needed its IP address. When the Internet went mainstream people wanted
a better system for connecting to computers, so domain names were invented. Here’s how it works: when a program wants to connect to another computer, it doesn’t just use the IP address. Instead it asks one of a set of special computers called the “domain name servers” what the IP address for a given domain name is.

So if I want to connect to google.com, I would tell the program to connect to google.com, and it would ask the domain name servers what the IP address is for it, and then it would have the IP address and would go about connecting like normal.
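
That lookup is so routine that in most programming languages it’s a single call. A quick Python sketch (the exact address printed will vary, since big sites have many servers):

    import socket

    # Ask the domain name system for the IP address behind a name.
    print(socket.gethostbyname("google.com"))   # prints something like 142.250.72.14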

In order to get a domain name you need to pay for it. It’s kind of like a phone book, except that it’s a phone book you have to pay to put your name in, your phone number can change automatically, and you have to call a certain well-known phone number to ask it what the phone number is for a given person.

The problem:

The domain name system ultimately rests on a small set of “root” name servers (there are thirteen named root-server addresses, each mirrored around the world). Originally the idea was that there would be far more than that: every country could set up their own DNS servers.
The more DNS servers there are, the better, because if those root servers all went down at the same time then the Internet would stop working almost entirely.

5. Search Engines.

When the Internet was invented it was originally much more difficult for the average person to use. Originally people had to write their own computer programs to communicate with each other over the Internet.
There were several protocols people experimented with and each had their own associated client and server programs, but one protocol would one day become synonymous with the Internet in the minds of ordinary people:
the web. The web is not the same thing as the Internet. The web is something built on top of the Internet. It’s a protocol (called the “hypertext transfer protocol”, or “HTTP”) that’s used by the client (called a “web browser”)
to access documents from the server (the documents are called “web pages” and the servers are called “websites”).

These web pages often contained references to other web pages (called “hyperlinks”, or “links” for short). This way all the web pages on the Internet were connected in a giant web of connections. To navigate this web, special websites called “search engines” came about. They let people type a few words into a text box, press a button, and get back a web page that was generated automatically (yes, GENERATED: the page wasn’t made by a human, a computer program on the website created it on the spot) containing a list of links to web pages that contained the words typed in.
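
At the heart of the trick is an “index”: a table mapping each word to the pages that contain it, so the results page can be generated on the spot. A toy sketch with made-up pages:

    # A toy search index over some made-up pages (real engines index billions of pages).
    pages = {
        "example.com/bread":  "how to bake bread at home",
        "example.com/planes": "how airplanes stay in the air",
        "example.com/toast":  "the best bread for making toast",
    }

    index = {}                       # word -> set of pages containing that word
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)

    def search(query):
        """Return the pages that contain every word in the query."""
        results = set(pages)
        for word in query.split():
            results &= index.get(word, set())
        return sorted(results)

    print(search("bread"))           # ['example.com/bread', 'example.com/toast']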

This way you could quickly find information about whatever topic you wanted, even from sites you had never heard of and might never have stumbled onto yourself. There was no more “stumbling onto” things. It could all be pulled from a search engine. Just ask it a question and you’d get the answer instantly.

The problem:

In order to make money off of having your own search engine you would generally use advertisements. The user would search for something and get a relevant advertisement for that thing along with their search results. This isn’t a bad thing on its own, but what many find creepy is that while you’re learning things from the search engine, the search engine is learning things about you.

Many search engines, such as Google, are constantly gathering data on everyone, feeding that into a supercomputer, and figuring out what kinds of advertisements might be relevant to you. Companies like Google gather vast amounts of information on everyone. Having privacy on the Internet isn’t easy because of the sort of power for spying on people that the likes of Google have. Speaking of which…

6. Anonymizers.

One of the things the Internet was supposed to be good at was anonymity. It works like this:
Once you’ve been banned from a website for spreading the absolute TRUTH about the rampant corruption that the administrators of that site engage in (and I’m only half joking, some of these people really do accept bribes to keep certain messages hidden), you can create another account and do it again.
Or can you? This led to many sites keeping track of the IP addresses that a banned user came from. So to get around this, proxies were invented. The idea is simple: instead of connecting directly to the website, your computer connects to another computer called a “proxy”, which then connects to the site for you. When you get banned again you can just use a different proxy.
This also helped prevent websites from collecting huge amounts of data on their users.

This became such a popular way of ensuring anonymity that it was expanded upon with the Tor network. Tor works like proxies, but it’s an entire distributed network.
The way it works is you connect to a computer called an “entry node”, and that connects to another computer, and that connects to another one, and so on until one of them, an “exit node”, connects to the website. This creates layers of encryption and makes it very hard to spy on people.
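
In code, using a plain proxy is usually just a configuration setting. Here’s a sketch using the popular third-party Python requests library, with a made-up proxy address standing in for a real one:

    import requests   # a widely used third-party library: pip install requests

    # "proxy.example.net:8080" is a placeholder; a real proxy's address would go here.
    proxies = {
        "http":  "http://proxy.example.net:8080",
        "https": "http://proxy.example.net:8080",
    }

    # The website only ever sees the proxy's IP address, not yours.
    # (With a made-up proxy this will fail, so catch the error for the sketch's sake.)
    try:
        response = requests.get("https://example.com/", proxies=proxies, timeout=10)
        print(response.status_code)
    except requests.exceptions.RequestException as error:
        print("could not reach the site through that proxy:", error)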

The problem:

Unfortunately people have found ways to fight this. All a site needs to do is use the Tor network to connect to itself again and again; then it knows which IP addresses are Tor exit nodes, and it can refuse to accept connections from those computers.

Also, any proxy that an ordinary person can find out about can probably also be found out about by the website, which then has more IP addresses to ignore. This problem is still being worked on, but to make a long story short there is no perfect solution to getting anonymity online.

Anonymity is important. Without it people can be spied on by corporations and governments alike, and targeted censorship is much easier when they know who is sending the messages being censored. While it is true that hackers generally use anonymizers to remain hidden, it’s important to understand that there’s more to hacking than just using anonymous connections, which brings us to:

7. Bugs.

In computer programming a bug is a mistake that a programmer makes when writing a computer program.

You may recall that computer programs are just lists of instructions for the machine to follow. Some programs are larger than others though. Some programs have many instructions, while others have very few. As a general rule the more features a program has, the more instructions it’s made up of. Some features take up more instructions than others, so it’s hard to say how many instructions a given feature will have.

The more instructions there are, the more places there are for bugs to exist. In theory it’s possible to write a program that has no bugs; however in practice it almost never is.
In my experience there are generally two types of bugs: the first kind are the ones you’d never know existed because you’re a normal human being who’s NOT constantly trying to break things, and the second kind are the ones ordinary users will notice because they show up when just using the program normally.

For example: I’ve heard that in the early days of Amazon you could order 0.1 of a product and get a 90% discount. I don’t know if that’s true or not, but it’s a perfect example of the first kind of bug. Nobody who uses Amazon normally would think to do that, while there’s a very specific type of person who would immediately think to try it.
A great example of the second kind of bug is how, when you play a video in the YouTube app for Android, it will sometimes briefly show the first frame of the wrong video.
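
Here’s a hedged sketch of what that first kind of bug might look like (a made-up store, not Amazon’s actual code): nothing ever checks that the quantity is a sensible whole number, so everything works fine until someone deliberately tries something odd.

    PRICE = 20.00   # price of one (hypothetical) item

    def order_total(quantity):
        # Bug: nothing checks that quantity is a whole, positive number.
        return quantity * PRICE

    print(order_total(3))     # 60.0 -- normal use, looks fine
    print(order_total(0.1))   # 2.0  -- the "90% discount" nobody intended

    def order_total_fixed(quantity):
        if not isinstance(quantity, int) or quantity < 1:
            raise ValueError("quantity must be a whole number of at least 1")
        return quantity * PRICE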

The problem:

Most proprietary software is full of bugs, while most freedom software has far fewer bugs but can often lack certain features that people need.

8. DRM.

“Piracy is almost never a price issue. It’s almost always a service issue.”
-Gabe Newell, CEO of Valve Software.

One of the problems with writing software is finding a way to make money off of it.
The most obvious method is to sell programs as if they were physical objects; however this brings with it a problem: what if users just make copies of the program
and give them to each other, so that only one person ever needs to buy it? This is where DRM (digital rights management) comes in. DRM is any kind of system that attempts to stop piracy.

Unfortunately for the software companies there’s nothing that really can be done to stop copying from happening. No matter how you write the program, so long as the program is running on my computer,
I can just modify the program to not do any of the things that would prevent it from being copied and used by everyone.

The only defense that software companies had against this was to get the government to make it illegal to modify computer programs.
Software companies have tried many different approaches to implementing DRM, but DRM systems tend to do very little to stop piracy, and tend to just annoy the customers who paid for the product.
In response to this, many programmers adopted license agreements that explicitly allow people to modify their programs, but that’s another story for another blog post.

The problem:

That DRM exists at all. It’s not real technology, and it tends to just make life harder for the people who paid for the software.
There’s nothing that can be done to make it impossible to pirate a program. Except…

9. Streaming.

To prevent people from being able to copy programs, software companies are now experimenting with the idea of having the program run on their own servers, which they fully control, and simply allowing their users to connect to those servers and use the program remotely. This method of preventing piracy is almost completely foolproof, and it only comes at the cost of customers never being able to keep a copy of the program.

The problem:

Soon enough it may be the case that you could pay hundreds of dollars for the chance to “buy” a program which could stop working at any time. Back when people could have a copy of the actual program on their own computer they could just keep using it forever (unless it was programmed to slow down over time or just quit working one day, which is something that many companies do).

With this system, the instant that the program becomes no longer profitable the company can just shut down the server computers running it and then the product would stop working entirely.

This has huge implications for the user, and those implications are all bad. Keeping copies of old programs is just as much a form of archival as keeping copies of old newspapers or old books. Some people put a lot of effort into keeping copies of old information so that society as a whole can remember it, and this threatens archival at a fundamental level.

10. Security.

Now that we’ve connected to the Internet we need to start thinking about security. Unfortunately Windows is not secure. At all.
Thankfully, though, Windows does give us some nice examples of how NOT to do things. Security is a complex topic, but as I’m sure you know it generally involves an attacker (or what some might call a “hacker”) who tries to break
past the security of whatever they’re hacking, so let’s begin there. Remember before when I mentioned that there’s a certain kind of person who’s always looking to break things? That’s what a hacker is.

There are a few different kinds of hackers. The first are black hat hackers. These are the kinds of hackers who hack into systems illegally. They generally do it using two kinds of attacks: the first kind is where they exploit bugs, and the second kind is where they overload the target with information that comes in too fast for the computer to process. There’s a reason why programmers often care about fixing all bugs, including
ones that no ordinary person would ever come across. It’s to prevent black hat hackers from getting in.

Unfortunately most programmers aren’t as good at hacking as a dedicated hacker is, so that’s where the second kind of hacker comes in. White hat hackers are penetration testers (or “pen testers” for short) who are hired to break into computer systems in order to find the ways an attacker could get in.

Companies pay white hat hackers to test their systems for them, and some of them get paid a lot of money for this service. The way they do it is they try all kinds of things to find bugs in the program that can be exploited, and then they report those bugs to the programmers. Of course, some people would rather keep those bugs a secret in case they want to hack into systems themselves. Those are called “gray hat hackers”. They do a bit of both. Finally there are red hat hackers: hackers who work for a government and hack other governments.

The problem:
Most companies that run servers, such as Twitter, YouTube, Facebook, etc., tend to have security that’s good enough to keep amateurs out, but not good enough to keep a determined professional out.

Some companies care more about security than others, but what they all have in common is that they all only care as much as they’re forced to. Freedom software projects care a great deal about security, while proprietary
software tends to care not at all.

11. Encryption.
Encryption, to put it plainly, is the act of modifying information so that it only has meaning to its intended recipient. Obviously there’s a lot that could be explained about cryptography, but for now let’s focus on encrypted connections over the Internet.

When a computer forms an encrypted connection to another computer, it sends its messages to that computer through intermediate computers. That’s just how the Internet works. Now if computer A (“Alice”) forms an encrypted connection with computer B (“Bob”), then the messages have to pass through intermediate machines. If one of those intermediate machines is trying to spy on Alice and Bob, it could take the messages from Alice, pretend to be Bob, form an encrypted connection with Alice, and then form a separate encrypted connection with the real Bob.

This way the spying computer could take messages from Alice, pretending to be a simple messenger, decrypt those messages, read them, and then send them along through its encrypted connection to Bob.

This is known as a man in the middle attack. To solve this problem digital signatures are used. A digital signature is a way to make sure that a message was sent by the person it was supposed to be sent by and that it couldn’t have been modified.
The way that works is your computer generates two files (called “Keys”). One is the public key, and the other is the private key.

The idea is that you give the public key away to anyone who will accept it, and keep the private key a secret, sort of like you’d keep a password a secret, except that this password is too large to remember and so it has to be kept in a file.

The private key can be used to encrypt files in such a way that they can be decrypted by the public key, and ONLY the private key can encrypt files so that the public key can decrypt them. Of course the public key is meant to be given away to everyone, so this isn’t good as a means of keeping secrets, but it is good for digital signatures. If Alice sends a message to Bob, she wants to make sure that any man in the middle can’t modify it (that’s how man in the middle attacks work: they require the man in the middle to modify a message sent by Alice). So she encrypts it with her private key.

The man in the middle receives this encrypted message and can decrypt it, since he has Alice’s public key just like everyone else; however he can’t modify it and re-encrypt it.

To modify a message is to make a new message, and in order to encrypt this new message the man in the middle must have Alice’s private key, which he doesn’t have. The only option he has is to send it to Bob. Thus Alice and Bob are able to form an encrypted connection that the man in the middle can’t spy on.
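
To make that concrete, here’s roughly what signing and verifying look like in code, using the third-party Python cryptography package (real libraries use dedicated signature algorithms rather than literally encrypting with the private key, but the idea is the one described above):

    # pip install cryptography  (a widely used third-party library)
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # Alice keeps this secret
    public_key = private_key.public_key()        # Alice gives this to everyone

    message = b"Hello Bob, let's talk privately."
    signature = private_key.sign(message)        # only the private key can produce this

    # Bob (or anyone with the public key) can check the signature.
    public_key.verify(signature, message)        # silent if valid, raises an error if not

    # A man in the middle who tampers with the message gets caught:
    try:
        public_key.verify(signature, b"Hello Bob, send me your password.")
    except InvalidSignature:
        print("tampering detected")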

This is exactly what websites do, except with one small problem:
how does the website get its public key over to the web browser? The website can’t just digitally sign it and send the signed version, because the only way to verify that the digital signature is legitimate is to already have the public key.

This is where trusted third parties come in. The idea is simple: any website that wants to offer encrypted connections pays a trusted third party to digitally sign the site’s public key with
the trusted third party’s own private key. The web browser already has the trusted third party’s public key, and can thus verify the public key it receives from any site. This might seem complicated but it’s really not as complicated
as it sounds.

The problem:

This system relies on trusted third party companies whose whole job is to sign public keys for websites and NOT make it possible to spy on people’s connections.
Naturally this is not a very good system, because those third parties might be bribed into compromising their clients’ security. Unfortunately, nobody has been able to come up with a better system than this.

Another problem is that much of today’s cryptography could be broken by a new technology called “quantum computing”. A large enough quantum computer would make it practical to crack the public-key encryption systems currently in use, which would force a move to new, quantum-resistant encryption schemes. So far nobody has been able to build a quantum computer that can handle enough information to break the encryption systems we currently rely on, but that could change soon enough.

12. Cryptocurrencies.

Cryptocurrencies are currencies that are specifically designed to be decentralized. Decentralized means that there is no one central server that could take the entire system down if it went offline.

The way it works is that there’s a ledger that keeps track of the transactions people make using the cryptocurrency. This ledger is distributed across the various mining systems. A mining system is just a computer that keeps a copy of the ledger and tries to solve a complicated cryptographic problem for it; in return, a small amount of new currency is added to the system, which the miner’s owner gets to keep as a reward for keeping things running.
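
That “complicated cryptographic problem” is usually a proof-of-work puzzle: keep trying numbers until the fingerprint (hash) of the ledger page plus that number starts with enough zeros. A toy sketch in Python:

    import hashlib

    ledger_page = "Alice pays Bob 5 coins; Bob pays Carol 2 coins"
    difficulty = 4                      # how many leading zeros the fingerprint must have

    nonce = 0
    while True:
        fingerprint = hashlib.sha256(f"{ledger_page}{nonce}".encode()).hexdigest()
        if fingerprint.startswith("0" * difficulty):
            break                       # found it: this is the "mining" reward moment
        nonce += 1                      # otherwise try the next number

    print(nonce, fingerprint)           # hard to find, but instant for anyone to re-check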

The problem:
There are no regulations in these economies, so scams are rampant. Also the exchange rate fluctuates wildly for some reason.

13. Video Games.

In 1982 Disney released the movie TRON. TRON was a fictional story about what the inside of a computer was like. It was a nice little story in which programs were played by human actors wearing silly little glowing costumes, and they played video games against each other to the death like Roman gladiators. Of course what the digital world is actually like (as discussed at the beginning) is a bit different. What is the inside of a computer world like?

It’s whatever we tell it to be like. By teaching the computer how to display triangles in three dimensions it’s possible to create what look like actual 3D objects since all 3D shapes can be approximated as a bunch of triangles.
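
A minimal sketch of the basic math involved: a 3D point gets drawn on a 2D screen by dividing its x and y by its distance from the camera (this is called perspective projection); do that for a triangle’s three corners and you have a flat triangle ready to draw.

    def project(point, screen_width=800, screen_height=600, focal_length=500):
        """Turn a 3D point (x, y, z) into a 2D screen position (a simplified sketch)."""
        x, y, z = point
        screen_x = screen_width / 2 + focal_length * x / z    # things farther away (bigger z)
        screen_y = screen_height / 2 - focal_length * y / z   # end up closer to the center
        return (round(screen_x), round(screen_y))

    # One triangle floating in 3D space, described by its three corners.
    triangle = [(-1.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.5, 6.0)]
    print([project(corner) for corner in triangle])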

In these virtual worlds it’s possible to create things (called “objects”) that can do stuff. This has been used extensively to make video games. Each individual object can be given its own programming, and players can interact with these objects using some kind of input to the computer.

That input can be a keyboard and mouse, or joysticks, or a motion controller, or whatever else we can use to communicate with a computer device. In the early days of computer games people would generally play a game by themselves. In first person shooter games (“first person” meaning it’s seen from a first person perspective, “shooter” meaning there’s a gun in the corner of the screen) the players would often fight some kind of enemies by shooting at them. The enemies were objects just like everything else in the game.

The objects would have instructions associated with them telling the computer where the enemy should be, what the enemy should be thinking, and, importantly, how to display the enemy on the screen.

Eventually people became very good at fighting enemies in video games, while the enemy characters didn’t get any better at all, since they weren’t programmed to be able to learn.

Programming enemies in video games isn’t easy, and they often struggle to keep up with the skills that some of the best gamers have. So to deal with this game developers released games that allow gamers to play against each other online.
This way the best of the best could have an actual challenge.

The problem:

There are many problems plaguing the world of video games. For one, there’s rampant, serious abuse of game developers by management. For another there’s the problem of gambling mechanics, micro-transactions, and pay-to-win. There’s also the problem of DRM.

Many games have tried offering items that you can pay real-world money for, which will either make your character look cool or (in some cases that nobody likes) give you weapons, items, or abilities that give you an advantage over other players (that’s called “pay to win”).

This isn’t entirely a bad thing though. Some games are “free to play” meaning you can download and play them for free and the developers get their money from selling virtual items.

Some games have started adding a particular type of virtual item called a “loot box”. The basic idea is this: after every round of an online game, the player might get a box that contains items they may want. They have to pay real world money to unlock the box and get a random collection of (mostly worthless) items. Only occasionally do they get something of value.

Needless to say, nobody likes this gambling mechanic, and many people have become addicted to it the same way people become addicted to gambling. For more information, I would recommend this YouTube channel.

14. Prosthetics.

We’ve seen so far just how widespread the use of computer chips is, and the invention of the Turing machine is arguably one of the most significant inventions ever.
These computer chips are used for basically everything in engineering precisely because they can, in theory, be programmed to do anything. So is it any surprise then that they would eventually be used for medicine?

Sometimes people lose limbs, sometimes people have organs that fail, sometimes people get bored of having an ordinary human body that can’t be upgraded and want something more. This is where cybernetics comes in.

Of course cybernetics is still a young, developing field, but it’s come a long way from the days of peg legs and hooks for hands to today, where we have cybernetic arms with fingers that can move in response to the movements of muscle groups in the arm.

While machines aren’t yet able to decipher the signals that the brain sends to body parts to control them, and they certainly aren’t completely ready to be installed into people’s brains so as to give them the ability to control machines with their thoughts, that technology is closer than most people think. Vsauce2 did a great video on cyborgs here.

The problem:

As I’m sure you’re well aware, healthcare is expensive, and the thought of getting a brain implant just because you’ve always wanted a computer chip in your brain is a long way off in a political climate that’s skeptical of the idea of giving amputees replacement limbs for cheap. Not to mention the small issue of making computers secure enough to be used like that. As mentioned earlier, software companies tend to only
care as much as they’re forced to.

15. AI.

The idea of a machine that blindly follows orders is exciting; however there is one small problem: the instructions it understands are too low level. We can’t give it a high level command like “Go make me a sandwich” because it can’t understand that unless we provide extremely detailed instructions on how to do that.

One thing we need is to give it the ability to see and recognize what bread looks like. The first part of that challenge has been thoroughly solved. We can hook up cameras to the computer that allow it to see things, not just in the visible spectrum of light, but even in other spectrums such as infrared and ultraviolet. To put it plainly: there are cameras out there that can see in colors that the human eye can’t see in.

The next problem is how to get the computer to understand what it’s looking at. When the computer receives an image from the camera that image is just a 2D grid of numbers. Somehow we need it to identify if the image is of a slice of bread, and where the bread is in the image. That would be a good starting point.

This has proven to be an incredibly difficult task. How is it that the human brain is capable of recognizing nearly any object and yet computers can’t? Somehow something is going on inside the human brain that enables it to do this. So that’s exactly where the research is being focused. Computers can simulate things.

They can keep track of anything, whether those things are atoms in a chemical reaction or the planets in outer space; computers can simulate them all. In fact computers can even keep track of things that don’t exist, like, for example, the exact electrical signals going through a chunk of brain matter that doesn’t actually exist in the real world.

By simulating a chunk of brain cells we can teach the computer how to recognize bread, peanut butter, jelly, or any number of different objects. We can also use these artificial neural networks to recognize human speech, or certain sounds, and much more.

The only problem with modern artificial neural networks is that they learn very slowly. It takes thousands or even millions of images of bread, and images that don’t contain bread (which must be carefully labeled by humans) in order for the neural network to learn how to tell the difference between the two types of images. There’s active research going on
right now to try and make neural networks that can learn with fewer samples.
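
To give a feel for what “learning” means here, this is a deliberately tiny sketch: one simulated neuron learning to separate two clusters of made-up points by adjusting its connection strengths (weights) a little after every example. Real image-recognition networks work on the same principle, just with millions of weights and millions of labeled samples.

    import random

    random.seed(0)   # so the sketch gives the same result every run

    # Made-up training data: points near (0, 0) are class 0, points near (2, 2) are class 1.
    samples = []
    for _ in range(50):
        samples.append(((random.gauss(0, 0.3), random.gauss(0, 0.3)), 0))
        samples.append(((random.gauss(2, 0.3), random.gauss(2, 0.3)), 1))

    w1, w2, bias = 0.0, 0.0, 0.0            # the neuron's adjustable "connection strengths"

    def predict(x, y):
        return 1 if (w1 * x + w2 * y + bias) > 0 else 0

    for _ in range(20):                     # go over the whole training set 20 times
        for (x, y), label in samples:
            error = label - predict(x, y)   # 0 if the guess was right, +1 or -1 if wrong
            w1 += 0.1 * error * x           # nudge each weight toward the right answer
            w2 += 0.1 * error * y
            bias += 0.1 * error

    correct = sum(predict(x, y) == label for (x, y), label in samples)
    print(correct, "of", len(samples), "training points classified correctly")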

Of course AI can be used for more than recognizing objects and sounds, it can also be used to pick up on any pattern that can’t be easily described by computer code, such as finding trends in the stock market, or finding a complicated pattern in human language that enables it to translate from one language to another, or playing a game automatically, or anything that requires it to learn how to recognize patterns.

This ability enables us to take computers from being bricks that you carry around with you (like smartphones, or laptops), and give them the ability to move around on their own. Which brings us to:

The problem:

As you can imagine, a machine that can learn patterns can be used for many things it shouldn’t be used for, but one that seems to keep rearing its ugly head is crime prediction.
It’s important to bear in mind that these algorithms aren’t magic. They just find patterns in data and latch onto them, regardless of whether those patterns are valid or just a coincidence.

“Predictive policing” isn’t a good idea because it’s just a higher-tech way of having someone look through all the data for patterns, and the patterns the algorithm latches onto can just as easily reflect coincidence or bias as anything real.

Vsauce2 made a great video on this very topic that you can see here.

16. Robotics.

CGP Grey made a great video on what the future may look like when machines take over, which you can see here. I’d also recommend this link [insert link to wait but why AI article] on what the future will look like when an artificial intelligence is made that’s as smart as a human. To put it plainly, robots are coming to take over all jobs.

Automation is inevitable, and that should be a good thing. I’ve often seen people surprised by the idea that robots can do all jobs including the jobs of designing, building, programming, and maintaining other robots, but to me it just seems like the natural conclusion to the engineering process.

I’m not trying to say that machines can be people. I’m saying that people ARE machines. People are machines according to machine theory. There is nothing that a human can do that a robot can’t do.

The problem:
Humans won’t be able to compete with robots for jobs. Robots are the ideal worker: they don’t demand basic human rights, they don’t demand a paycheck, and they can be left to run 24/7.

List Of Programming Project Ideas.

The best way to learn to program is to work on projects, so here are some project ideas for beginner, intermediate, and advanced programmers. Remember: it doesn’t matter whether or not you finish these projects. What matters is what you learn along the way. If you have any ideas for things to add to this list, you can leave a comment.

Pseudo code (good for warm ups):

  • Write a procedure for a made up assembly language that blinks a light on and off.
  • Write a procedure for a robot that replaces a tire on a car.
  • Create a finite state machine diagram for a mobile robot that finds a red ball and returns it to the robot’s charging location.
  • Write a procedure for sorting a given list of numbers from least to greatest.
  • Write the pseudo code for a self-driving car that can go around the block.
  • Write a procedure for a computer chip that opens a window when it receives a radio signal from a button.
  • Do some research on how ants in an ant colony behave (basically how ant colonies work) and come up with the pseudo code for a robot ant that will work with other robot ants to try and act like an ant colony.

Assembly:

  • A program that exits with return status 0
  • A program that prints “hello, world”

C/Rust:

  • A Linux kernel driver for a USB button.
  • A C standard library that has memory allocation using the buddy algorithm.
  • An Arduino program that blinks an LED.
  • An Arduino program which turns on a motor that closes or opens a window based on the temperature.
  • A program that uses loops to print out the lyrics to 99 bottles of beer on the wall.

Golang:

  • A library that parses XML concurrently.
  • A simple web server that serves up one HTML page.
  • A web server that serves up all files in the current directory.
  • A concurrent pipeline that reads in MNIST samples and uses ImageMagick to turn them into image files, and maybe applies some optional filters.
  • A concurrent system that runs Dijkstra’s algorithm.
  • A web server that uses youtube-dl to download YouTube videos automatically, save them, and play them back to whoever owns the server.
  • A program that uses concurrency and ImageMagick to automagically fix the gamma settings for a huge number of image files in parallel.
  • A reddit scraper.
  • A program that takes in a list of URLs from standard input and (in parallel) checks each of them to see if their servers are up, and then prints to standard output the names of any that are not responding.

Prolog:

  • A script that helps you pick out parts for a PC.
  • A simple package manager using an SQLite library.
  • A script that lets you access Firefox web history and use readln to issue queries about it.
  • A script that acts as a pharmacist.
  • A script that checks the school schedules of a student for overlapping classes.
  • A script that extracts all links from an HTML page (note: you will need a library for this).

Haskell:

  • A program that takes an integer and tells you if the integer is prime using a parallelized brute force method.
  • A password cracker that uses the Control.Parallel.Strategies library.
  • A program that takes in an adjacency list as a CSV file and spits out an adjacency matrix.
  • A program that runs K-means image segmentation on its input using the accelerate library for GPU acceleration.
  • A neural network library that uses dependent types, and the accelerate library.
  • A program that can lazily generate all possible tweets.
  • A program that lazily generates and prints the Fibonacci sequence (note: you will have to set stdout buffering to line buffering).

Bash:

  • An rsync wrapper that backs up your files using snapshot backups.
  • A program that finds duplicate files in a directory and makes them the same file using hard links so as to save space.
  • A script that grabs a random line from a given file.
  • A script that renames every file to include the date and time it was modified in the file name.
  • A script that uses a regular expression to check if a given input is a valid phone number.
  • Write a script that goes through a directory and all of its sub-directories and deletes all images.

Lex and Yacc:

  • An XML parser.
  • A C compiler.
  • A programming language that has all the features you wish other programming languages had.
  • A parser for the wavefront obj file format.

Erlang:

  • A simple web forum using the yaws program.
  • A gopher server.
  • A bank website.
  • A server that can serve up videos.
  • A Debian package server using this specification.
  • A Mastodon web server.
  • A web server that says “Hello, world” via a web page (use yaws to make this easier).
  • A server that takes in lines from over a network connection, shuffles them, and then sends them to a different specified connection (note: you should assume that not all the lines being given to the server can fit in the memory of just one computer).

Python:

  • An XKCD comic downloader.
  • A script that uses AI to draw googly eyes on images.
  • A program that simulates a Galton board, and prints out how often each pocket gets a ball in it.
  • A calculator program with a GUI using a GUI library.
  • A program that prints out the lyrics to 99 bottles of beer on the wall using loops.
  • A program with an interactive prompt that asks the user what it should do and makes function calls to various things it can do (like “remove [filename]” or “tell me the time” or something).
  • A program that asks for two numbers, and then calculates the length of the hypotenuse of a right triangle with those two numbers being the side lengths (hint: use the Pythagorean theorem).
  • A program that uses objects to keep track of cars for a dealership.

Lisp:

  • A script that takes in a list of birthdays and the names of people associated with them, checks the date, and says happy birthday to anyone whose birthday is on the current date.
  • A script that takes in a list of numbers and tells you how often each number shows up.
  • A program that takes in a list of numbers and returns the average, and standard deviation of those numbers.

Octave/Matlab:

  • A script that uses rotation, scaling, and translation matrices to trilaterate the position of a thing given its distances to four given points in 3D space.
  • A neural network script.

Pytorch:

  • A script that can read a text file and summarize it using transformers.

Projects where you’ll have to decide on your own what language to use:

  • A program that reads in a wavefront obj file and displays it in a window, and the user can rotate the model around to see it from different angles.
  • A Twitter client that uses ncurses.
  • A quick script or program that can generate the sound of what hydrogen should theoretically sound like when excited.
  • A remote controlled differential drive robot (note: this will require some knowledge of electrical engineering).
  • A program that argues with the user (bonus points if it uses AI).
  • Add a new feature to an existing project on github.com
  • A series of programs that enables you to create a genetic breeding model for machine learning.
  • A program that reads an image from a file and blurs it using the ImageMagick library.

 

How To Install Linux

In the last post I listed websites for the various Linux distros. If you’ve never used Linux before and want to try it, then I’d recommend Linux Mint because it’s made for beginners. Unfortunately installing Linux is both easy and hard.

It’s easy if you have some technical skills and it’s almost effortless if you’ve installed Linux before, but it’s not possible to list out what buttons to press, because each computer model has a slightly different way of installing Linux.

There’s simply no way to write a single article detailing how to deal with every possible thing that might come up purely because there are too many different models of computers out there. The steps tend to be nearly identical for each one, but it will take some technical skills to do this.

Also it’s possible to screw things up in such a way that you can’t boot up your computer, but Linux Mint tries to make it harder to do that.

You may want to do this on an old laptop that doesn’t contain any files that you care about.

In order to install Linux you’ll need to download an ISO file from the website for whichever one you choose.

The next step is to download and install this free software program: https://rufus.ie/en_US/

Then plug a USB stick into your computer. All the files on the USB stick will be wiped, so make sure it doesn’t contain anything important.

If you want to use Linux alongside Windows then you should also defragment your hard drive, and if you’re low on disk space then you may want to back up and/or delete any large unwanted files from your computer.

Next use Rufus to burn the ISO image to the USB stick.

Next reboot your computer with the USB stick still in. If you’re on a laptop then make sure that your computer is fully charged and plugged in. If the power goes out during the installation process then it might break Windows and/or Linux.

Now comes the hardest part. It used to be as simple as rebooting with the USB stick in your computer, but computer companies changed the default settings to boot to Windows instead.

If your computer doesn’t automatically boot to Linux then reboot again, and press F12 and then select the USB stick to boot from. Some computers may require you to press F2 and go into some settings to tell it to boot from USB, others might have you press some other button.

Most computers will flash the logo of the company that made the computer at boot up and have a message saying which button to press to go into settings or which one to press to bring up a menu of options for things to boot from. You want it to boot from the USB drive, and not the internal hard drive.

Once you’re in these settings you can’t use the mouse. You’ll need to move around using only the keyboard. You might want to disable quick boot and secure boot, and you might want to change the boot priority so that USB devices are booted from first.

Once you’ve booted up to Linux most distros (including Linux Mint) will have an interactive prompt that will guide you through how to install it.

If you want to keep Windows alongside Linux then that’s called “dual booting”, and you’ll need to tell Linux to do that at some point in the installation process. Many distros will by default erase Windows and install over it, so you’ll need to keep an eye open for when it asks you how to install it.

Once you’ve done that you should check out this free pdf book: http://linuxclass.heinz.cmu.edu/doc/tlcl.pdf which will teach you how to use it better.

What is Linux?

In the previous post I talked about vendor lock-in. Computers are taking over more and more of our daily lives, and this trend of computers controlling more of the world is not likely to come to a stop any time soon.

Soon enough cybernetics might start to become a reality. We may someday soon be able to upgrade ourselves as we can upgrade our machines. In such a world vendor lock-in would mean that you don’t own parts of your own body. We can’t allow ourselves to be chained to the likes of Microsoft. We can’t allow our society to remain locked in to the security disaster that Windows so fundamentally is.

At some point you have to ask yourself:
Do you want to be free?

Do you want to take back control? If so then Linux is for you.
Linux is an alternative to Windows and Mac. It’s the official unofficial third option that nobody talks about. Linux is different from Mac and Windows not just because it’s a different product, but because it’s built in a different way.

Linux wasn’t written by a single corporation. Linux was written by many different organizations and individuals from all around the world. Anyone can contribute code to the Linux kernel to add new features or fix bugs in it, and countless people already have.

If you don’t trust corporations then Linux is for you.

Linux is designed to be customizable. While other inferior operating systems let you change the background, Linux lets you decide whether or not there even is a background, or if there’s just a command line. With Linux you can change nearly anything about the system. You can change the desktop to have any kind of theme. You can tell it whether or not to update automatically, whether or not it should have one clipboard or many, whether or not all your files should be encrypted, and even whether or not it can run Windows programs.

If you want customizability then Linux is for you.

Linux respects your privacy. The kernel can’t contain code that spies on you because anyone who would add such code is legally required to make the instructions to it publicly available, and everyone would be able to see the code that spies on you and anyone could make a fork of the project that doesn’t spy on you.

If you want privacy then Linux is for you.

Unlike Windows, Linux is designed with security in mind. Whenever a security hole is discovered it is quickly patched up, and an update is released that makes the system more secure for everyone.

Microsoft Windows has security holes in it that everyone has known about for decades which Microsoft has done nothing to fix. With Linux, people from all around the world are able to inspect the code and find any security holes, and then file a bug report which quickly gets handled, or even fix it themselves and send the security patch
to the kernel maintainers.

If you want security then Linux is for you.

Linux isn’t just one product. There are many variations of Linux that you can choose from. You can go with Linux Mint if you want something that’s simple, Gentoo Linux if you want something that’s really advanced, Debian Linux if you want something that’s stable, Arch Linux if you want something that’s on the cutting edge, Red Hat Linux if you want something for corporate needs, or any number of other options.
Each of them is a variation on the same thing, and each of them is compatible with each other while still offering variety.

If you want choices then Linux is for you.

Linux has been optimized over decades by talented people from all around the world so that it can run on anything and everything in a way that’s fast and efficient.
Linux is capable of running on everything from the tiny computer chip in your microwave oven all the way up to the supercomputers that Google uses for their search engine.
From digital wristwatches to servers to supercomputers, from microchips to self-driving cars, from laptops to airplane autopilots, Linux runs them all behind the scenes and keeps the digital world safe and secure.

If you want speed and efficiency then Linux is for you.

And now, with help from Valve Software, Linux is gaining the ability to run an increasingly large number of video games. Someday soon Linux may be the definitive choice for video games.
The truth is that Linux is for everyone.

If you want to be free, then Linux is for you.

 

Hatsune Miku Is A Scam.

“If you can’t explain it to a six year old then you don’t really understand it.”

-Albert Einstein

What’s the worst term in your typical end user license agreement? Is it that you can’t sue the company that sold you the program? That they can remotely shut it down at any time for any reason? Is it that you sold your soul to them? No. The worst term in any EULA is the one that seems the most innocent. That you can’t reverse engineer the program.

Hatsune Miku is a scam. Here’s how the scam works.

A computer program is just a series of instructions that a computer follows.
It’s basically just commands that tell the computer what to do.

A file is just a bunch of ones and zeros that is meant to
represent information.

Let’s take a look at some real world examples of this in action:
Imagine that you own a small business. As a business owner you have a lot of documents you need to keep. It doesn’t matter what kind of business you run, you need to keep records of things.

So let’s imagine that you chose to use Microsoft Word to write those documents. Microsoft Word is a computer program. It’s just a set of instructions that the machine follows. When you save your document, that program follows instructions that tell it how to turn the text you’ve entered into the ones and zeros that go into a file.

When you open that file with Word later on, there’s another section of instructions in Word that tells it how to translate the ones and zeros from the file back into the document you see on your screen. Those instructions that make up the program are proprietary. You aren’t allowed to know what those instructions are.

That’s just how it’s licensed. As a result nobody knows how to translate Word files into documents except Microsoft. So if you’ve got hundreds or thousands of documents in Word, you can’t go through and open each of them up and copy and paste the text into another word processor. That’s just not practical.


You need a word processor from another competitor that can be compatible with Microsoft Word documents, but Microsoft tried to make that impossible by keeping the code for processing Word documents a secret.

This is the main strategy of Microsoft. It’s called vendor lock-in. Vendor lock-in is when the vendor, in this example Microsoft, sells you a product that you become artificially reliant on, which means you’re locked in to them. This is very common in the software world. Nearly every proprietary program you’ve ever used does this. It’s gotten so bad that many people reading this article might be thinking:

“So what if you can’t open word documents in anything but Microsoft Word? It’s a word document. It’s only reasonable that you can’t open it with anything else.” But that’s not how things have to be. The point of vendor lock-in is to make it so that the vendor is not held very accountable to their customers.

That’s what Vocaloid does. It’s designed so that you can’t open Vocaloid project files with anything other than Vocaloid.

There is another way. There’s free software.

 

When people speak of free software, they’re referring to freedom, not price.

Free software is software that is licensed in such a way that you are legally allowed, and even encouraged, to look at the instructions that make up that free computer program. Companies that write and distribute free software are held more accountable to their customers, because if their customers find a bug in the product,
or want a new feature added, they can send in a bug report and/or feature request, or they can hire a programmer to fix that bug and/or add that feature.

Because customers can always switch over to such a modified version of the product, the people who wrote the software have to care about their users in order to keep them.

How you make money off of free software when anyone can legally modify it and give it away for free is really another story for another time, but there are ways for programmers to get paid while still being held more accountable to their users.

They often don’t get paid as much, but that’s partly because it’s hard to compete with vendor lock-in. Free software projects tend to be underfunded, but it doesn’t have to be that way. If there were demand for Vocaloid to be free software, then it might become even more popular than it currently is.

It breaks my heart when I see people saying things like “Oh, Vocaloid is so great”
“Oh, it changed my life.”
“Oh, I’m crying every time I watch a vocaloid concert.”
“Oh, I wish Miku were real and Noah wasn’t.”

It’s okay to love the idea while refusing to condone a dishonest business practice, because behind the idea is a corporation, and corporations don’t care about anything other than profits. This whole community looks like a bunch of Apple fanboys, if Apple fanboys were an actual cult. Right now it’s a product that is created and controlled by a single corporation, but when a thing becomes free software it becomes something more than that. It becomes a thing that is owned and controlled by multiple corporations and a community. It becomes something that can be permanently embedded into the digital world.

The future is free.

What is functional programming?

In 1978 Intel released the 8086 processor. It was the first processor of the x86 line. It ran at 5 megahertz, had a single core, and looked more like a large micro-controller than a modern CPU. Seven years later they released the 80386, which ran at up to 33 megahertz and also had a single core.

Twelve years later they released the Pentium II, which ran at 450 megahertz and still had only one core. It wouldn’t be until eight years later that they released their first processor with two cores, which ran at 3.2 gigahertz. Then 15 years later AMD released a processor that had 64 cores and ran at about 4 gigahertz.

Computers aren’t getting much faster. They haven’t been getting much faster for over a decade, and they’re not likely to get much faster in the near future. The laws of physics, as they apply to silicon computer chips, forbid them from getting faster without dramatically increasing the power consumption. Of course research is now being done on alternative materials, but you’d still be a fool to assume that computers will get much faster any time soon.

Computers aren’t getting faster, but their parts are still getting smaller. That means more instructions they can recognize, more memory and cache, and more cores. Now is the time to become good at programming systems with many cores. So here’s how to do that.

What is functional programming? In a nutshell, functional programming is programming where we minimize the percentage of functions in our code that have side effects. What is a side effect? A side effect is anything a function does besides computing its return value from its arguments: reading or writing a global variable, using random numbers, or doing any kind of I/O.

So why is there a word for this? Because functional functions (often called “pure” functions) are completely deterministic. You can call one with a set of arguments, get a return value, wait a while, call it again with the same arguments, and be guaranteed to get the same return value. This, as it turns out, is a powerful assumption to be able to make about a function.

If it accesses random numbers then it’s not deterministic. If it takes input then it might return a different result when something giving it that input changes. If it’s doing output then that output operation might fail. If it accesses a global variable then the global variable might have changed since the last time the function was called.

To put it plainly, a functional function does not rely on its environment. It’s just a subroutine that only takes in arguments, does a calculation, and returns an answer. A functional function just does a computation.
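
Here’s a small sketch of the difference in Haskell (the function names are made up for illustration): the first function is purely a computation, while the second has a side effect because it reads and updates shared state.

import Data.IORef (IORef, readIORef, writeIORef)

-- Pure: the result depends only on the arguments, so calling it twice
-- with the same arguments always gives the same answer.
priceWithTax :: Double -> Double -> Double
priceWithTax rate price = price * (1 + rate)

-- Not pure: it reads and updates shared state, so two calls with the same
-- price can return different totals depending on what happened in between.
addToRunningTotal :: IORef Double -> Double -> IO Double
addToRunningTotal totalRef price = do
    total <- readIORef totalRef
    let newTotal = total + price
    writeIORef totalRef newTotal
    return newTotal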

So some of the advantages: functional functions are easier to use. They line up with how we naturally write software (more on that later). There’s less internal state to think about. You don’t need to get a whole environment set up just to call the function; you can pass the environment to it as an argument.

It’s more consistent, so bugs are more reproducible. There’s no need to worry about race conditions, which means it’s easier to debug, and therefore more stable and more secure. It’s inherently thread safe. It’s often faster because it makes no system calls, and there are fewer error cases to check. It’s more portable because it doesn’t rely as heavily on system-specific calls. It’s also easier to pull the code out into a shared library if the project gets too big.

Also, if the compiler knows that a function has no side effects, it can apply optimizations to it that it couldn’t safely apply to functions which have side effects. Functional code can be much faster when the compiler knows what has side effects and what doesn’t.
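
As a rough illustration (not tied to any particular compiler), if the compiler can see that square is pure, it is free to compute a repeated call once and reuse the result:

square :: Int -> Int
square n = n * n

x :: Int
x = 7

-- Because square has no side effects, these two definitions mean exactly the
-- same thing, so a compiler may quietly rewrite the first into the second:
area :: Int
area = square x + square x

area' :: Int
area' = let s = square x in s + s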

To put it plainly, maximizing the amount of your code that’s functional has nothing but advantages to it. How often in life do we come across something that has nothing but advantages?

If we’re writing a hypothetical project, like for example an image library, then don’t write your code like this:

Img getImage(string filename);


Where you write a function that takes a file name as an argument, reads that file, and then returns an image object. This means that the file can only come from the disk, and can’t be directly taken from a network connection.
Instead write your code like this:

Img getImage(byte[] fileContents);
Img getImage(channel byte[] fileContents);

where it takes in either the entire contents of the file as an argument or some kind of channel that spews out the contents of the file one block of data at a time.
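
Sketched in Haskell with a made-up Img type and placeholder decoding logic, the idea looks something like this: the decoder is pure and works on bytes from anywhere, and only a thin wrapper touches the disk.

import qualified Data.ByteString as BS

-- A deliberately simplified image type; a real library would define this properly.
data Img = Img { imgWidth :: Int, imgHeight :: Int, imgPixels :: BS.ByteString }

-- Pure: decodes whatever bytes you hand it, whether they came from a file,
-- a network connection, or a test case.
decodeImage :: BS.ByteString -> Either String Img
decodeImage bytes
    | BS.null bytes = Left "empty input"
    | otherwise     = Right (Img 0 0 bytes)  -- placeholder decoding logic

-- Impure wrapper: the only part of the library that touches the disk.
loadImageFromFile :: FilePath -> IO (Either String Img)
loadImageFromFile path = decodeImage <$> BS.readFile path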

Don’t write your blur method like this:

class Img {
        void blur()
        {
                // blur this instance of the image.
        }
}

where you write the method to blur that instance of the image object. Odds are that your user will want to keep the original.
Instead write it like this:

class Img {
        Img blur()
        {
                // code
                return copyOfImageButBlurred;
        }
}

where we return a copy of the image that’s exactly the same except blurred. That’s not as efficient with memory, but these days we can usually afford to trade a little memory for clarity.
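
Here’s a quick Haskell sketch of the same idea, with a toy Img type: since the value is immutable anyway, blur naturally takes an image and gives back a new one.

-- A toy image type: a width plus a flat list of pixel brightness values.
data Img = Img { imgWidth :: Int, imgPixels :: [Double] }

-- Pure "blur": returns a new image and leaves the original untouched.
-- (A real blur would average neighbouring pixels; this placeholder just dims them.)
blur :: Img -> Img
blur (Img w ps) = Img w (map (* 0.5) ps)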

If you’re writing an autopilot then don’t write your code like this:

int main()
{
        // get sensor data
        // react to input
        // get more sensor data
        // react to input some more
}

where we do random stuff in whatever order we feel like.
Instead add some organization to your code by writing it like this:

int main()
{
        state := initializeState();
        while(1) {
                input := getSensorInput();
                state, commands := processInputFunctionally(state, input);
                runCommands(commands);
        }
}

where we initialize the state, and then, in a loop, we gather sensor data, feed it and the current state into a big function with no side effects, get back the new state plus commands for what to do, and then execute those commands. This way the code is more modular: the main code for the autopilot can be cleanly removed and hooked into a simulator for testing.
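
Here’s a minimal Haskell sketch of that shape; the sensor and command types, the numbers, and the I/O stubs are all made up for illustration.

-- Hypothetical types standing in for real sensor readings and actuator commands.
data SensorInput = SensorInput { altitude :: Double, heading :: Double }
data Command     = ClimbTo Double | HoldHeading Double
type State       = Double  -- say, the target altitude we are trying to hold

-- The pure core: no I/O and no globals, so a simulator can drive it in tests.
processInput :: State -> SensorInput -> (State, [Command])
processInput target input
    | altitude input < target - 50 = (target, [ClimbTo target])
    | otherwise                    = (target, [HoldHeading (heading input)])

-- Stubs standing in for the real I/O; a real autopilot would talk to hardware here.
getSensorInput :: IO SensorInput
getSensorInput = return (SensorInput 9500 270)

runCommands :: [Command] -> IO ()
runCommands _ = return ()

-- The impure shell: gather input, call the pure core, run its commands, repeat.
main :: IO ()
main = loop 10000
  where
    loop state = do
        input <- getSensorInput
        let (newState, commands) = processInput state input
        runCommands commands
        loop newState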

But there’s more to it than that. Functional programming can be taken up a notch with functional programming languages. There are programming languages specifically designed to be functional.

Let’s look at the first one: Lisp. Lisp is a functional scripting language, and in the purely functional style it encourages, variables are treated as immutable. What does that mean? Remember earlier when I told you to write the blur function so it returns a blurred copy of the image? There’s a reason for that. As it turns out, most variables are set once and then never written to again, and the ones that are written to again tend to be written to in a loop.

As it turns out, you can replace loops with recursion, do away with modifying variables more than once, and still be Turing complete.
This way people who read your code can assume that any variable will only be set once. This makes it easier to reason about what the code is doing.
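
For example, the running total that an imperative loop would keep in a mutable variable can be computed with recursion instead, so nothing ever gets reassigned (a quick Haskell sketch):

-- Imperative pseudocode:  total = 0; for each x in the list: total = total + x
-- Recursive version: every name is bound exactly once in each call.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs
-- sumList [1, 2, 3, 4] is 10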

If we’re going to create a language where functions will be called up frequently then we need a new syntax that makes calling functions easier.
And here’s what that looks like.

(sqrt (add (square a) (square b)))

As you can see there’s no need for commas. All arguments are separated by spaces, and the function name goes inside the parentheses.

While that’s certainly nice, Lisp lacks an important feature: declarative programming. If we’re going to be writing lots of recursive functions then we need to make that easy to do. Most recursive functions have some kind of if statement that checks the arguments so the function knows when to stop. So why not add some special syntax to do exactly that?

factorial :: Int -> Int
factorial 0 = 1
factorial x = x * (factorial (x - 1))

This is what Haskell code looks like. What we do is define multiple versions of the function, each with its own pattern for the arguments it expects. Then when we call the function, it tries one pattern after another until it finds one that fits, and runs the associated version of the function. If you want to write a function that takes the factorial of something, you can do that in just three lines of Haskell.
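
For example, a call like factorial 3 unfolds one equation at a time until it hits the base case:

factorial 3
  = 3 * factorial 2
  = 3 * (2 * factorial 1)
  = 3 * (2 * (1 * factorial 0))
  = 3 * (2 * (1 * 1))
  = 6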

This style of programming is known as declarative programming. In declarative programming we give the computer a set of declarations, then pose it a query, and it uses those declarations to figure out the answer to that query.

We can use pattern matching and declarative programming for more than that. Prolog is a logic programming language based on predicate logic. In predicate logic we take the set of everything in the entire universe and use functions that return true or false, known as predicates, to filter out anything that’s not an answer until we get the set of all answers, and we can do so declaratively.

Here’s what that looks like.

is_smart(noah).
is_smart(nate).
is_smart(bob).

We can issue a query to this code through an interactive prompt that will list everyone who is smart. We can also create predicates that use other predicates to filter out invalid answers.

is_smart(noah).
is_smart(nate).
is_smart(bob).
has_computer(noah).
has_computer(nate).
is_programmer(X):- is_smart(X), has_computer(X).
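% A query such as ?- is_programmer(X). answers X = noah and then X = nate;
% bob is filtered out because has_computer(bob) is not among the facts.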

The way this works is through backtracking: when one candidate fails partway through a rule, Prolog backs up and tries the next one. Here is a page that explains how that works internally: http://www.amzi.com/articles/prolog_under_the_hood.htm

Prolog has another feature: atoms. An atom is like a string, except that it’s a single immutable symbol. Atoms are often used as messages and tags that get passed around. A lot of large programming projects send information around as strings, so why not make that easier and more efficient? That’s what Prolog does. Any identifier that doesn’t start with an uppercase letter (or an underscore) is an atom; identifiers that do start with an uppercase letter are variables.

Atoms show up in other languages as well, for example Erlang.
Erlang is a programming language that was designed for distributed computing systems. It’s functional, concurrent, and declarative, and part of its declarative nature is that it can use atoms to pattern match against incoming messages.

Here we see something like that:

process_http(...) ->
        receive
                {get, ...} ->
                        % foo
                        ...;
                {post, ...} ->
                        % bar
                        ...
        end.


This Erlang function, when it reaches this chunk of code, will check its incoming mailbox for messages. It will execute code foo if the message is a tuple that starts with the atom ‘get’, and it will execute code bar if it receives a tuple that begins with the atom ‘post’.

Erlang is specifically designed to be THE solution for distributed computing systems. It’s got almost everything you could want, except for backpressure.
Sending a message in Erlang never blocks, and a process’s mailbox can grow without limit, so it’s very easy to accidentally slurp up a file that is too large to fit in memory on the computer the Erlang process is running on. In Go, by contrast, channels have a fixed capacity (zero unless you ask for more), so a sender that gets too far ahead simply blocks until the receiver catches up; Erlang processes get no such built-in protection against running out of memory.

So how do we solve this problem? Earlier I suggested that we should write functions that load up images by taking the entire contents of the image file as an argument. What if the image is too big to fit into memory?

This is where lazy evaluation comes in. Lazy evaluation allows us to build infinitely large data structures. Lazy expressions are only executed when they’re needed. The way that works is that a function which is pure can be executed later instead of right now because it will return the same result either way.


Haskell is a lazy language. Every expression in Haskell is lazily evaluated. This lets us write programs that naturally process input while it’s still being read. It’s not possible to tell exactly when a chunk of code will run in a lazy language, but we don’t have to care, because it will give us the same result in the end anyway.

fib :: Int -> Int -> [Int]
fib a b = a : fib b (a + b)

The above code returns the entire Fibonacci sequence as a list. It returns a cell containing the first argument as the head, while the tail is an unevaluated expression (a thunk) that remembers the call to fib and its arguments.
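
Because the list is only built as far as we actually look at it, we can ask for just a prefix and the rest is never computed:

take 10 (fib 0 1)    -- [0,1,1,2,3,5,8,13,21,34]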

But what if we have an error lazily loading the image? This is where monads come in. A monad is a thing we can use for, among other things, error handling and threading state (such as would-be global variables) through a computation. Here’s the idea: in C, if we have a function that does IO then that IO might fail, so we need to be able to return an error indicator, but then the function calling THAT function also needs an error indicator, because something it called does IO.

 

Ultimately what we tend to do in real life is either handle the error if we can, or, more often, just pass it up one level until it reaches the top so the user can deal with the problem. That’s exactly the pattern a monad can capture. A function which uses such a monad in Haskell either returns a successful value or produces a failure carrying an error value.

import qualified Data.ByteString as BS

getImage :: MonadFail m => BS.ByteString -> m Img
getImage fileContents
    | BS.null fileContents = fail "Error: empty file"
    | otherwise = do
        -- code
        return img

main :: IO ()
main = do
    c <- BS.readFile "file.png"   -- read the whole file as bytes
    i <- getImage c
    foo i
    bar

You can also chain monadic functions together so that if any step in the chain fails, the whole thing stops right there. Here, if getImage is given an empty file, it calls fail with a message saying so. The function that calls getImage chooses, through the MonadFail constraint, which type actually carries the error message or the Img object.

In this case the main function uses IO as the monad, so if reading the file fails, main stops with that error. If the read doesn’t fail, then getImage is called. If getImage doesn’t fail, then foo is called with the argument i, and if that doesn’t fail, then bar is called. If any of them fails, main ends with an error.

Of course, monads can be used for more than just errors. If we have a data type that contains other data, such as a list or a binary tree, then monads make it easy to write code that doesn’t know or care whether the data is in a tree, a list, or some other container.
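
As a rough illustration of that reuse, the same do-notation we used with IO above also works with the list monad, where it behaves like a pair of nested loops:

pairs :: [(Int, Int)]
pairs = do
    x <- [1, 2, 3]
    y <- [10, 20]
    return (x, y)
-- pairs is [(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)]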

I hope that you’ll try out some of the ideas I’ve presented here. No idea should go unchallenged forever, and I think the idea of “truly object oriented” needs to be challenged now. Unlike the vaguely defined idea of truly object oriented, functional programming has a precise definition.

With functional programming you can apply this idea to your existing code bit by bit and see some immediate improvements. Simply finding functions and methods that have unnecessary side effects, and removing those side effects (or moving them somewhere else) can greatly improve maintainability.

It can be slightly hard to get into the habit of doing things in a functional way, but it’s well worth it.

The future is functional.

The Best Intro To Linux I Could Come Up With

I often need to explain to people what Linux is, but I couldn’t find any articles or videos that explain it sufficiently well. In an earlier blog post I lazily linked to a YouTube video that explains how to install Linux Mint, but now I’m going to explain it in more depth.

What is Linux?

The short answer is that Linux is an operating system kernel. Most people are unfamiliar with what an operating system is, much less a kernel, so let me explain. An operating system is merely a program, or series of programs, that manages the hardware and software on your computer for you. When you download and install a program, that program gets registered with the operating system, and when it runs it makes requests to the operating system to do things.

A kernel is just the core program of an operating system. You can’t use a kernel on its own; you need other programs to run with it. There are many operating systems that use Linux. These are called “Linux distributions.” You are probably already familiar with Microsoft Windows (which is an operating system), and you may have even used Mac OS X. Unfortunately Apple and Microsoft have created some widespread misconceptions about how operating systems work, so let’s clear them up.

Microsoft does not sell computers. They do sell some hardware, but they are a software company first and foremost. Their main product is Microsoft Windows. Most places that sell computers sell machines from various companies (Dell, Toshiba, etc.) with Microsoft Windows already installed. This has led many to believe that Windows is a part of the computer, rather than a program running on it.

Another misconception is the idea of “Mac vs PC”. Apple ran an advertising campaign comparing “Macs to PCs” which created the notion that there are only two options. It didn’t help that Apple’s Mac OS X operating system only likes to run on Macintosh computers, and that Apple doesn’t like anything else running on their machines.

Linux is a series of alternative operating systems. There are hundreds of Linux operating systems (AKA distributions, or distros), but the main ones for a beginner are Ubuntu and Linux Mint. Linux distros have the stereotype of being hard to use, so Ubuntu was made by a company called Canonical to be a user-friendly Linux distro; however they were recently accused by many of having lost touch with what their users want (the same old story every time, am I right?).

Thankfully the licenses that Linux distros are under encourage people to look at the code behind them, modify it, and then give away the modified version for free, or sell it. The end user license agreements for Windows and Mac OS X say that you can’t do that. Linux’s license encourages it.

How do I get Linux?

Here are the home pages of some Linux distros:

Ubuntu: https://ubuntu.com/
Linux Mint: https://linuxmint.com/
Debian: https://www.debian.org/

Debian was thrown in there because that’s the one I use. Installing an operating system is different from installing other kinds of programs. You’ll need to download an ISO image, and then burn it to a DVD (like people did in the good old days), or write it to a USB drive (some old computers don’t support booting from USB, but all the modern ones do).

You might also want to back up all your files, have a Windows install disc handy, and defragment your drive first. The next step will be to reboot, but before we do, let’s talk about a little bit of theory. Your computer uses what’s called a BIOS (Basic Input Output System). When the computer starts, the BIOS runs the POST (Power On Self Test), where it checks the hardware to make sure nothing is missing. At this point it will very briefly display the logo of the company that made your computer (unless it’s been configured not to).

In the old days the BIOS would check the different drives on the computer to see what operating systems it could find. It would start with the DVD drive, and if there was no DVD, it went on to the hard drive (where Windows was preinstalled). If there was a disc in the tray, really ancient computers would try to run whatever was on it as an OS, and if it was actually just a movie they would give weird errors. Later ones were smart enough to check whether there was actually an OS on the disc.

Unfortunately, because everyone and their brother was using Windows anyway, computer stores would reconfigure the BIOS to start with the hard drive, and only check the DVD tray or USB drive second. This means that users who have little ability to solve technical problems on their own will have a hard time installing Linux.

When the logo of the company that made the computer is displayed, it will tell you which button to press to reconfigure the BIOS (usually F2), and most machines will also tell you which button to press to choose which device to boot from (usually F12). Some users need guides that tell them exactly what to do step by step, right down to which button to press; unfortunately there’s no universal way to install an OS. You’ll need some ability to solve technical problems on your own (though not very much).

Now we can reboot the computer with the USB stick still in (or the DVD still in the tray). Once we’ve selected the device to boot from Linux will be running and there will be a nice installer that you can follow. You can install Linux so that it’s a replacement for Windows, or you can set up your computer to ask you whether to use Windows or Linux when it starts up (this is called “dual booting”.) The installation process is very simple for Ubuntu and Linux Mint. Debian’s is more advanced, but I have no problem with it.

Once the installer is running it will probably take a while, so let’s go over the advantages of Linux while we wait. Linux is far more secure than Windows. You’ll still run into technical problems with Linux, but security is rarely one of them, and there’s no real need for antivirus software. Linux is also much faster, and can run on much older hardware.

Unfortunately many of the programs you use on Windows will not run on Linux without WINE. WINE allows Linux to run many Windows programs, but it gives up some of the security Linux has. Thankfully most Windows programs have Linux alternatives that are largely compatible with the Windows versions.

Something you might want for Linux is this: The Linux Command Line By William Shotts

With that you’ll be able to solve technical problems, and automate daily tasks in ways that will make you wonder how you ever managed without it.

The story of Wheely Bot.

Some time ago I found a video series on how to program robots, and I decided to try to make one, named Wheely Bot. This is its story.

The first version was pretty simple. I ordered a simple body for it off of the Internet, as well as a motor controller circuit and two wheel encoders. I used an Arduino Uno that I already had lying around and programmed it to rotate itself and adjust its angle based on the angle of its wheels. It knew the angles of its wheels from the wheel encoders, and it used a PID controller to adjust itself.

Unlike my previous attempts at robots, this one didn’t just assume that the wheels had rotated by exactly the amount it told them to. Unfortunately there was a lot of wiring going to the Arduino and it began to become a bit too much. To resolve this I bought some small breadboards to attach to the body, and put an Arduino Nano on them to help the Uno and reduce the wiring.

The idea was pretty simple: the Nano would gather sensor data and use it to figure out what was going on, and the Uno would control things and ask the Nano what was going on via I2C. This worked, but I still had a problem that had plagued the robot from the beginning: the wheel encoders weren’t very accurate. They could be off by as much as ten degrees from the actual angle of the wheels.

To fix this I got four sonars and attached two to each side, and I came up with a formula that let me calculate the robot’s angle to a wall given the distances the sonars were returning. This way it could find a wall and adjust itself against it.

I had even come up with a library for automagically calculating the accuracy of a given number that I got from a calculation.

It was around this time that tragedy struck. I had been using two rechargeable AA batteries and a 9V battery to power it, because the motor driver needed 12 volts to operate while everything else needed 5 volts, so I had been using a 5V regulator. Unfortunately everything was drawing a lot of current, which caused the regulator to break and, weirdly enough, put out more than 5 volts. This fried basically everything.

After some panic over losing potentially everything, I bought two new Arduinos, and they broke as well. So I bought a DC-to-DC converter, which could supply more current and not break as easily, along with some 6V rechargeable batteries that could also supply more current.

Since I was tight on money I decided to use a Raspberry Pi Zero that I had lying around as the control brain, and use the new Arduino Uno as the sensor brain and motor controller.

Since programming a Raspberry Pi Zero is different from programming an Arduino, I have not yet been able to get all of my original code (located here) to work, but I have added a new feature: unlike the previous Wheely Bot, the new one has a USB Wi-Fi adapter that connects to my home network and allows me to remotely log in to it and remote control it.

I’ve finally gotten the remote control program working, as well as the onboard camera that I added to the Raspberry Pi Zero. Now I finally have a remote control toy with an onboard camera, like I always wanted as a kid.

You can find the new and improved Wheely Bot source codes (“codes” because there’s more than one program) here, here, and here.

There’s also a YouTube video: here

How Electricity Works, Part 7: Why Some Electrical Sockets Have Three Holes

Earlier in this series I told you how voltage can be thought of like the height that electricity starts at, and that its “height” is relative; however, height can also be measured as an absolute, and so can voltage.

You can measure the height of a hot air balloon from the center of the Earth if you really want to, but your ruler won’t do that for you unless you can get it to reach the center of the Earth.

Similarly your multimeter won’t measure the absolute voltage unless you attach it to something that actually does have zero voltage. What exactly can have zero voltage? Do insulators have zero voltage? Does a rubber balloon have zero voltage? Not if you rub it against your hair.

Voltage is what comes from charge. Whenever you have charge you have voltage, and whenever you have voltage you have charge. If high voltages can kill you by putting too much current through your heart, then that means that too much charge can kill you.

Electrons and protons are dangerous little things. A small amount of them (like the amount in a charged balloon) isn’t enough to hurt you, but if there’s too much then that can cause problems. The problems come from the fact that having too many charged things (like electrons or protons) in a small space without an opposite charge to cancel them out will cause the voltage at any point near them to be too high.

This is sort of like how having too much mass in one spot makes the gravity around it stronger. So how do you get those electrons or protons apart so that they’re not so concentrated and don’t produce so much voltage? You let them spread out into a big conductor. Remember: like charges repel. If a bunch of electrons are all close to each other then they’re trying to get away from each other; however, they can only move through conductors.

If you hook your giant, charged, helium balloon of death up to a conductor that is, for example, the size of the Earth, then the charge spreads out and the threat is neutralized. This is why any electronic device with a metal case that plugs into the wall has a third prong on it.

That third bit of metal connects the metal case straight into the ground, because the ground conducts electricity. If a part of the circuit that connects directly to one of the two other metal prongs on the plug touches the metal case, then the charge from the outlet will flow into the ground, and hopefully not hurt you when you touch the metal case.