As just one example, there were lots of computers based on the Z80. It doesn't have a hardware multiply instruction. Does that mean they weren't computers?

I'm only three minutes in and I've done programming with limited instruction sets (8 bit pic micro). All you need is a shift left and shift right and a bit test, then you can do multiply and divide.
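The shift-and-add scheme this comment describes can be sketched in a few lines (Python here for brevity; the function name is just illustrative):

```python
def multiply(a: int, b: int) -> int:
    """Multiply using only shift-left, shift-right, bit test, and add."""
    result = 0
    while b:
        if b & 1:        # bit test: is the low bit of the multiplier set?
            result += a  # accumulate the shifted multiplicand
        a <<= 1          # shift left: a * 2
        b >>= 1          # shift right: consume one multiplier bit
    return result
```

Division works the same way in reverse, subtracting shifted copies of the divisor.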

I think I have a program that can multiply two numbers using only the instructions that we have right now (so no conditional jumps). The idea is that we can edit the RAM, which means we can both change the storage AND the actual program itself. This means that the data can actually change how the program behaves, and with this we should be able to make a multiplication program.

This is the program that I came up with:

0: LDA 3
1: ADD 17
2: STA 3
3: JMP 31
4: LDA 3
5: SUB 17
6: STA 3
7: LDA 18
8: ADD 16
9: STA 18
10: LDI 1
11: STA 19
12: LDA 17
13: SUB 19
14: STA 17
15: JMP 0
16: X
17: Y
18: 0
19: 0
...
28: LDA 18
29: OUT
30: HLT
31: JMP 28

I actually needed 5 address bits, or 32 memory locations, to make this work, but I still only used the instructions we have available. Lines 0-6 are essentially what performs the conditional jump that we need for multiplication: they jump to line 31 if the number stored at address 17 is 0. With this it's relatively easy to do the rest of the program, which just adds the number in position 16 to the number in position 18 and stores the sum back in position 18. At the same time, it subtracts 1 from the number in position 17 and loops back to the beginning of the program, where again it checks whether position 17 is 0 and, if so, goes to the last line, line 31. Then, on line 31, I just jump back a bit so I can output the answer, which is in position 18.

So, basically, positions 16 and 17 are storing the two numbers we want to multiply, and position 18 is storing the result of that multiplication.

The way the conditional jump works is that I am editing the jump statement right before I reach it. In memory, the actual jump statement is represented as ABCD11111, where ABCD is the code for the jump instruction. Ben used 0110, so I'll also use that. The other digits are 11111 because that is the position we are jumping to, 31. Right before the program reaches the jump statement, we add the number in position 17 (which I'll call Y from now on) to the number in position 3. But the number in position 3 is the jump statement, so if Y is anything other than 0 we will be editing it. The jump statement before was 011011111, and since the address field is already all ones, adding any nonzero Y that fits in 5 bits carries over into the opcode, changing 0110 into 0111. If we make the code 0111 do nothing, then adding anything other than 0 turns the line into a no-op. Only if we add 0, which means Y = 0, will we actually jump to line 31, since the jump statement is left unedited. So this works as a conditional jump that goes to the last line exactly when Y = 0 (and Y fits in 5 bits).

I'm pretty sure that this means the computer is Turing complete, even without adding a conditional jump statement, but it's just much harder to reason this out, and it's probably easier to just add a conditional jump.
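The opcode-corruption trick above can be checked numerically. A minimal sketch, assuming (as the comment does) 4-bit opcodes, 5-bit addresses, and that 0111 decodes as a do-nothing instruction:

```python
JMP = 0b0110   # jump opcode, as in the comment
NOP = 0b0111   # assumption: 0111 decodes as "do nothing"

instr = (JMP << 5) | 0b11111   # 011011111: JMP 31

def effective(instr_word: int, y: int):
    """Add y to the stored instruction word, then decode it."""
    word = (instr_word + y) & 0x1FF     # 9-bit memory cell wraps
    return word >> 5, word & 0x1F       # (opcode, address)

# y == 0: the instruction is untouched, still a JMP to 31
assert effective(instr, 0) == (JMP, 31)
# any y in 1..31 carries out of the all-ones address field into the opcode
for y in range(1, 32):
    assert effective(instr, y)[0] == NOP
```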

So we can build an entire CPU by building multiple boards that each represent 1 cpu core?? Why not build an entire PC that can play Crysis trilogy, we can add 256GB of ram, so we can run the entire OS in a RamDisk.

Since our brains don't have infinite memory either, are we also technically not turing-complete computing machines?

However, a human with an infinite supply of pens and an infinite supply of paper would be, if storage is the only real obstacle there. The human just acts on the pointer and generates their own set of instructions based on received input. Thanks for this, it really helped my understanding of Turing machines and what it actually means to be a Turing complete machine.

I am quite sure from when I was studying computer science that we were told that modern computers don't have instructions for division, multiplication and subtraction. It is all done with adding, so at least I consider yours a complete computer.
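The subtraction-by-adding part of this is easy to demonstrate: in two's complement, a - b = a + (~b + 1), which is exactly how an adder circuit does it. A quick sketch (the 8-bit mask mirrors the breadboard machine's word size):

```python
def subtract(a: int, b: int, bits: int = 8) -> int:
    """Compute a - b using only addition and bitwise NOT."""
    mask = (1 << bits) - 1
    neg_b = ((~b) & mask) + 1      # two's complement of b
    return (a + neg_b) & mask      # wraps around like an 8-bit register

assert subtract(9, 4) == 5
assert subtract(4, 9) == 251       # -5 as an unsigned 8-bit value
```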

Anyone who is interested in this kind of stuff should read Code: The Hidden Language of Computer Hardware and Software by Charles Petzold. It opened my mind more than anything I've ever read on software development has.

No need to compute, it's 42

2:50 Well that was fun.

The program below, I believe, should work on an actual computer with those instructions (I've only got an emulator..), with the two inputs hard-coded.

…okay, this code might have a couple issues – the B input must be given twice as big as the argument you want… and it can't be bigger than 56 (112 in the code after doubling), but otherwise it does work!

0 lda [d] 00011101

1 add [4] 00100100

2 sta [4] 01000100

3 lda [b] 00011011

4 jmp [e] 01101110

5 add [c] 00101100

6 sta [b] 01001011

7 lda [f] 00011111

8 add [4] 00100100

9 sta [4] 01000100

a jmp [3] 01100011

b 00000000 0 // output storage

c input A

d input B

e out 11100000

f hlt 11111110 // aka -2

(note, the above looks better in a monospace font)

A computer can compute the meaning of life. It's 42

16 bytes of memory is an awful limitation. How hard would it be to improve on it? Did anyone try to?

It's not a TM but a linear bounded automaton. Still quite good.

whenever I hear discussions like this I immediately think of memory limitations (infinite tape) and calculation speed (operation size/CPU)

Does it play doom?

The main issue with your pronunciation of the Entscheidungsproblem is the fact that the sequence "schei" is pronounced exactly like the English "shy".

I've been thinking about such things a lot lately. After I convinced myself that conditional jumps are necessary for Turing completeness, I started to wonder why higher-level languages implement the specific flow-control mechanisms they do, such as branching and for/while loops in particular. If-else statements are pretty much self-explanatory, but is there any motivation behind for/while/do-while loops other than that these were the most common patterns programmers used conditional jumps for? Can anyone answer that precisely, please?

Beautiful. Loved it.

can this computer branch

What's with this lame modern way of saying "I am stupid", aka "I am butchering this pronunciation"?

make a one-instruction CPU (also known as the "ultimate RISC" architecture), build a binary lambda calculus interpreter on top of that, then the rest of the stuff on top of THAT

that ought to be quite a challenge ;P

also I bet that with industrial equipment all of that would fit into a chip about 1-2 mm² in size (did you know how small the actual chip inside SIM cards, bank cards, etc. is, AND how much they actually compute? the chips are way, WAY smaller than the metal contacts, and can still run programs – for example letting the phone check available credit without manually sending an SMS, or processing encryption keys in the case of bank cards, making sure you can't fake a card even if you intercept one purchase)
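For the curious, a one-instruction CPU along these lines is usually modeled as subleq ("subtract and branch if less than or equal to zero"). A toy interpreter, as a sketch only:

```python
def subleq(mem, pc=0, max_steps=10_000):
    """One-instruction machine: mem[b] -= mem[a]; branch to c if result <= 0."""
    steps = 0
    while 0 <= pc < len(mem) and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# usage: add mem[9] into mem[10] with three triples (the layout is an assumption)
prog = [9, 11, 3,     # 0: T -= A           -> T = -A, continue at 3 either way
        11, 10, 6,    # 3: B -= T           -> B += A
        11, 11, -1,   # 6: T -= T = 0 <= 0  -> branch to -1, i.e. halt
        5, 7, 0]      # 9: A = 5, 10: B = 7, 11: T = 0
assert subleq(prog)[10] == 12
```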

My head got blown.

Next challenge: Make an 8-bit computer. With 256 bytes of memory!

Why not use self modifying code to do conditional computation?

This is honestly so great. Extremely good explanation. Gets me interested in topics of complexity theory. Can you make videos on this?

Does Alan Turing's 1936 paper actually "reply" to Kurt Gödel's 1931 incompleteness theorem?

Oh, that's why Chrome consumes so much memory, because it was designed to be run on a Turing machine!

Turing sux. He was a fuckin' gayboi. fookin rip my guy how u gonna be gay? You don't get no girls being a gay lmaooooooooooo smh smh smh.👏 😂

I haven't thought this through, but your computer might be "Turing complete" (disregarding the infinite tape) without conditional jumps. If you override the instructions the computer executes based on some calculation, you will get conditional behaviour. Similar to the mov instruction, which is also "Turing complete". Granted, that's much more of a hassle and much less fun than proper conditional jump instructions, but hey, "Turing completeness" is "Turing completeness".

Edit: By "similar to the mov instruction" I meant that I think you could mimic the behaviour of mov. For indirect access you would use the overriding.

300 Hz is not a lot, but it is a lot for something you built yourself

To compute the meaning of life, you multiply six by seven

"[…] a sensor doesn't catch all light wavelenghts, but also because a sensor is based on hexadecimal data transport which at the moment is faulty due to the translation error of the 0 in the binary system which actually means "not 1" and not "nothing", that means that any computer to date is broken because the assumed multiplicator of 2 doesn't work, in the hexadecimal system there are 15 usable values and one is void. Why was it not necessary to bring up a 0 in the hexadecimal system? Because any number besides of "1" is not 1, obviously, there is no need for a 0 in here because in the binary system, seen relatively to the hexadecimal one, the zero simply was a placeholder for 2-16."

i'm late and not sure if anyone mentioned this before. but i thought turing was not implying an infinitely long tape. i think he only needed an arbitrarily long tape. that would mean that you're turing complete as long as you have enough ram for your program (and its computation). you don't need infinite ram…
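This "arbitrarily long, not infinite" idea is exactly how software Turing machine simulators usually handle the tape: cells are allocated only when they're touched. A sketch:

```python
class Tape:
    """An arbitrarily long tape: cells exist only once they are written."""
    def __init__(self, blank=0):
        self.cells = {}
        self.blank = blank

    def read(self, pos):
        return self.cells.get(pos, self.blank)

    def write(self, pos, sym):
        if sym == self.blank:
            self.cells.pop(pos, None)   # keep storage proportional to use
        else:
            self.cells[pos] = sym

tape = Tape()
for i in range(1000):
    tape.write(i, 1)
assert tape.read(999) == 1
assert tape.read(10**9) == 0    # "infinitely" far away, yet no memory used
assert len(tape.cells) == 1000  # only the cells actually written
```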

So I guess computation means the act of computing what can be computed and we can only compute computable things. In other words computeception. Only a turing machine can compute computeceptionally so a computer isnt a turing machine but a turing machine is a computer.

Ben

Just found your channel today and it is great! In response to your "challenge" I offer you ….

http://csguy.org/miniasm.pdf

In this school project from 1977 (I was a child) I demonstrate that the only COMPUTATIONAL instructions needed by software are:

1) subtract

2) branch minus or zero

From those 2 instructions and a multi-pass macro assembler I derived all the instructions needed to execute the 'Julia Robinson Conjecture'.

The assembler itself is NOT of interest for this conversation but rather all the appendixes.

The appendixes show the macros (which can leverage each other); the JRC program using the two instructions and various macros; and, most importantly, the final expanded "machine code" composed only of SUB & BMZ.

Best of all — it all works
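To illustrate the idea (without claiming to match the 1977 assembler's actual syntax), here is how macros might expand into a hypothetical SUB/BMZ instruction pair. The instruction names, operand order, and semantics below are all assumptions:

```python
# Hypothetical two-instruction target:
#   ("SUB", a, b): mem[b] -= mem[a]
#   ("BMZ", a, t): jump to instruction t if mem[a] <= 0 (branch minus-or-zero)

def CLEAR(t):                 # macro: mem[t] = 0
    return [("SUB", t, t)]

def ADD(a, b, tmp):           # macro: mem[b] += mem[a], trashing mem[tmp]
    return CLEAR(tmp) + [("SUB", a, tmp),    # tmp = -mem[a]
                         ("SUB", tmp, b)]    # b -= -mem[a]  ==  b += mem[a]

def JMP(t, tmp):              # macro: unconditional jump via a forced zero
    return CLEAR(tmp) + [("BMZ", tmp, t)]    # 0 <= 0, so always taken

def run(program, mem):
    pc = 0
    while 0 <= pc < len(program):
        op, x, y = program[pc]
        if op == "SUB":
            mem[y] -= mem[x]
            pc += 1
        else:                                # BMZ
            pc = y if mem[x] <= 0 else pc + 1
    return mem

mem = [3, 4, 0]
run(ADD(0, 1, 2), mem)
assert mem[1] == 7   # 3 + 4, built from subtract alone
```

Note how the macros leverage each other, just as the comment describes.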

a computer is the 5 logics not all those functions.

Infinite memory is easy. You buffer it and add more as you run out. As long as you keep feeding it more you never run out. We are already doing this today as our digital libraries expand exponentially every day. The infinite is therefore bound to the will of the creator to continue creating supposing resources remain available.

Ever wonder why the universe is expanding? Hint: it isn't inertia… Spacetime itself has none.

In x86, a single instruction suffices for Turing completeness… mov

I haven't been watching closely (I'm a programmer, I watch your stuff as I program 😉). So if I've missed something obvious, I apologise.

Anywho, my comment/idea. If I were building the board you have in front of you, I'd probably just modify the adder slightly to have a counter on the board that counts from the multiplier down to 0 (inclusive, as 0 is an easy check for completion of any multiplication) and tell the adder to add the original number to the current sum until the counter hits 0, then send the output to your screen as you already do. You'll obviously have to mark the incoming request as a multiplication, but that wouldn't be too hard to do. And for ease, you could just use a jumper to test it out and force it into multiplication mode (basically a loop).

So basically, in the example of 5*4 (something easy to start with): multiplier = 5 so set Counter to 5.

Initialise Sum to 0.

loop this test until counter is equal to 0:

Add 4 to Sum

reduce counter by 1

So basically, it'll go: counter = 5, Sum+4, counter = 4, Sum+4, c=3, Sum+4, c=2, Sum+4, c=1, Sum+4, c=0 -> show sum
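The counter scheme above, written out as a loop (a direct transcription of the 5*4 walk-through):

```python
def multiply(multiplier: int, multiplicand: int) -> int:
    counter, total = multiplier, 0
    while counter != 0:      # 0 is the easy completion check
        total += multiplicand
        counter -= 1         # reduce counter by 1
    return total

assert multiply(5, 4) == 20  # matches the counter = 5 ... c = 0 trace
```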

So that's both the coded version and hardware versions 🙂

Great series, really enjoying it. And thanks for making me think 🙂 (only a year and a bit late, lol)

Since modern computers have access to the internet, it's kind of like having access to an infinite tape: some way for the machine to change something apart from itself, which is also growing. So a computer can reference another computer able to do the calculation, and thus, in effect, do the calculation itself.

Is it computable?

If it can be used to show me pictures of cute cats: Yes

If not: No

You are so good at explaining; I read the Turing machine article on wiki several times (for leisure) but understood nothing. You make it so much simpler.

You are freaking awesome! I was researching this stuff but you made me understand it so much better 🙂 Thanks 🙂

A computer already calculated the meaning of life. It’s 42

Being a programmer, I wanna say it is missing *if-jmp*; do that with read and write and it will compute almost anything computable.

3:14

Slow down, dude, I'm no Plato

BIG LIKE

Is your 8-bit computer based on, or a copy of, the Gigatron TTL microcomputer kit? Just saw a video about it on The 8-Bit Guy channel.

Anything that can implement a nand gate or a nor gate (either one alone) is theoretically Turing complete, assuming infinite memory.
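The gate-level half of that claim – that NAND alone is functionally complete – can be checked directly by building the other gates from it (Turing completeness additionally needs memory and control flow, as the comment's infinite-memory caveat hints):

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):                  # the classic four-NAND construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# exhaustively verify every gate against its truth table
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```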

I immediately see: conditional jumps are the key.

Actually, if you can change the address of a jmp before execution, then you can simulate not only an if statement but also a switch/case statement.

For example, if jmp is a 1-byte command (tag + 4-bit relative address), then you can calculate some function and add it to the address nibble (if the function returns 1 bit you can make a conditional if; if more, a switch/case). It's better if you have two jumps: relative (short) and absolute (long). Conditional operators in that case can be represented as function + relative jmp + a set of long jmps.

So your original machine (I don't know your jmp format) may also be Turing complete.

Also, you can add a memory window to extend the amount of memory you have. For example, you can keep 7 fixed registers + a window-shift register + up to 256 pages of 8 registers. Unfortunately, half of your code will be managing those windows, but you get 2 additional kilobytes!))
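A sketch of the windowing scheme as described – 7 fixed registers, a window register at address 7, and 256 pages of 8 bytes (256 × 8 = 2048 bytes, the promised 2 KB). All names and the address layout are assumptions:

```python
PAGE_SIZE, N_PAGES = 8, 256

fixed = [0] * 7                                     # 7 always-visible registers
window = 0                                          # the window-shift register
pages = [[0] * PAGE_SIZE for _ in range(N_PAGES)]   # 256 * 8 = 2048 extra bytes

def read(addr):
    if addr < 7:
        return fixed[addr]              # addresses 0-6: fixed registers
    if addr == 7:
        return window                   # address 7: the window register
    return pages[window][addr - 8]      # addresses 8-15: the current page

def write(addr, val):
    global window
    if addr < 7:
        fixed[addr] = val
    elif addr == 7:
        window = val                    # switching pages costs one write
    else:
        pages[window][addr - 8] = val

write(7, 42)    # select page 42
write(8, 99)    # store into it
write(7, 0)     # page 0: address 8 now names a different byte
assert read(8) == 0
write(7, 42)    # switch back
assert read(8) == 99
```

The bookkeeping around `write(7, …)` is exactly the "half of your code will manage the windows" cost the comment mentions.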

/ɛntˈʃaidʊŋspʁoˌbleːm/ you're welcome 😛

Non-Snarky comment:

Infinite time for: DATA

Static

Dynamic

Quantum

Single static

Variable dynamic

Infinite quantum

DATA.

Snarky comment: I have nothing to add.

Isn't your MacBook almost a million times faster? Not 10 million?

By watching this I feel almost like I'm back in my high school (it was a weird one that taught us stuff like truth tables, rll encoding, how to calculate carry for each bit without waiting for actual results from the register, and tons of other related things). I'm glad people are still playing with such internals as if they were some toys, not just a set of skills required by a big industry. Thank you.

You should modify this and your video card to play the original doom

wow this was explained to me so poorly in uni that I never made all the connections between all these things. And it was supposed to be one of the best unis in the world… What a shame…

once again you have brightened my day <3 ty

So sad that Turing died so young, who knows what he would have come up with later in life had he lived to see real computers. At least he's now getting some of the recognition deserved (he's being put on the new £50 note for example).

It's Turing complete when you run Doom on it.

And you have a video card now, so…

Whoa, this is crazy.

my guess at 3:15 – branch, add, move

– branch when a register is 0 / or not

– add, including negative numbers

– move to and from a place in memory

it would be incredibly slow…. but I actually think it might be all one needs…

"compute anything that is computable"

– well, I think it's simple: the set of all computable things is defined by computation

– it is THE definition; you can only prove that some set is a sub/superset of that one.

I thought you were going to make it pass the Turing test. That would have been a challenge with 16 B memory for program and data 🙂 But I'm impressed as it is.

I watched this whole series up to the previous video and never realized that there were 3 more!

I'm a Haskell programmer, but I was only introduced to Haskell 8 months ago. If I had seen this video and googled "lambda calculus programming" I may have found it much faster 🙂 (other languages built off lambda calculus include the ML language family and Erlang).
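Since lambda calculus came up: multiplication really can be done with nothing but function application, via Church numerals. A sketch in Python lambdas:

```python
# Church numeral n = a function that applies f to x exactly n times
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))          # one more application
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))             # m repetitions of "f, n times"

def to_int(n):                  # decode a numeral for inspection
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
four  = succ(three)
assert to_int(plus(three)(four)) == 7
assert to_int(times(three)(four)) == 12
```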

I also think it's very interesting that the front cover of Church's paper in this video contains a generalization of Fermat's Last Theorem. Of all the problems he could've chosen!

This should multiply a 4 bit number by a 3 bit number without a conditional jump

0: LDA 15

1: ADD 14

2: STA 15

3: LDA 6

4: ADD 7

5: STA 6

6: 01101000 | 3 bit input

7: HLT 15

8: OUT 15

9: JMP 0

10: JMP 0

11: JMP 0

12: JMP 0

13: JMP 0

14: 00000000 | 4 bit input

15: NOP

2:46 Here's my take on multiplication. Max inputs 255×112 or 255×128 if you move OUT to 0000. And, since the output is 8-bit, it's as much as you could ever need 😛

00 LDA 5

01 ADD 14

02 STA 5

03 LDI 0

04 OUT

05 JMP 15 <-counter

06 ADD 13

07 STA 14

08 LDA 5

09 ADD 15

10 STA 5

11 LDA 14

12 JMP 4

13 x

14 y <=112/128

15 255 / HLT 15 / 11111111

Hardest part was me forgetting about zeros and having to redo it a bit, but in the end it actually raised the limits slightly :3

I don't follow why you cannot multiply by just adding inside a loop. Is your contention that this does not count as an actual multiply? Perhaps you feel you need a way to shift bits to do a proper multiply. Well, that's only one (or two) more instructions.

EDIT: watched more, looks like you don't have a compare.

Haven't watched the next video yet because I wanted to try my hand at the MUL program given a JC instruction. Got it in exactly 16 instructions, if you load the operands as part of the program (here I'm multiplying 3 by 5). This assumes 0-1 sets the carry bit. If it doesn't, you'd just have to set the second operand to 255-5+1=251 and ADD 0 instead of SUB 0. Also, because RAM is only 16 bytes, I cannibalize the first bytes of the program to store the operands/temp values, meaning you can't rerun it without rewriting it. I also output the value as it's calculated to save a crucial instruction…

0: LDI 1
1: STA 0 // we've read instruction 0 by now
2: LDI 3
3: STA 1 // we've read instruction 1 by now
4: LDI 4 // multiply by 5, load 4 to get JC to work
5: STA 2
6: LDA 2 // start of loop
7: SUB 0
8: JC 15
9: STA 2
10: LDA 1
11: ADD 1
12: OUT
13: STA 1
14: JMP 6
15: HLT

Gonna watch the next video now to see how close I got 🙂 (if you just initialize RAM directly instead of what I've done, you save 3 instructions and you won't have to cannibalize your program, but where's the fun in that?)
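The control idea in the comment above — terminate the loop when subtracting from the counter would borrow, i.e. when SUB sets the carry flag that JC tests — can be sketched in plain Python. This is a hypothetical model, not SAP-1 code; `bits` stands in for the 8-bit accumulator width:

```python
def mul_repeated_add(x, y, bits=8):
    """Multiply by repeated addition; stop when decrementing the
    counter would underflow (the emulated SUB carry/borrow -> JC)."""
    mask = (1 << bits) - 1
    product, counter = 0, y
    while True:
        if counter == 0:               # SUB 1 would borrow here: JC taken
            return product & mask
        counter = (counter - 1) & mask  # counter passes through the loop
        product = (product + x) & mask  # add the multiplicand each pass
```

For example, `mul_repeated_add(3, 5)` gives 15, and results wrap modulo 256 just like the 8-bit hardware would.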

"I would encourage you to try and write a program to multiply, but I don't think it's possible." Yeah, right, I'm going to invest my time in tackling something Ben Eater says is impossible… If he can't do it, I'm pretty sure not even NASA would crack it.

Please Hindi subtitles

3:38 Didn't Deep Thought already calculate that?

Hello, might be a little late, but here is how I thought about doing multiplication without the JC command. Here are the memory lines and the commands:

0 LDA 7
1 SUB 13
2 STA 7
3 LDA 14
4 ADD 15
5 STA 14
6 OUT
7 JMP (8+Y)
8 HLT
9 JMP 0
10 JMP 0
11 JMP 0
12 JMP 0
13 1
14 0
15 X

Y is multiplication factor, X is number to be multiplied.

It modifies address 7, which is a jump, and subtracts 1 every time an addition is done. So it will slowly change it from JMP 8+Y to JMP 8. It is very limited, since it can only multiply by up to 5, and will not work when multiplying by 0 – so it's sort of multiplication, but actually it's not. The possible multiplication factor could be increased by adding an instant SUB command, or if output were triggered by HLT. I have no way of testing this, since I didn't build this computer, though I really want to right now.

Sorry if this is bullfrick, I am an absolute beginner at this.
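The self-modifying trick in the comment above can be modelled in a few lines of Python (a hypothetical sketch, not something runnable on the actual machine). The JMP's address field starts at 8+Y and loses 1 each pass; when it finally points at line 8 (HLT), the loop ends. As the commenter notes, Y = 0 would hang, since the target skips past the HLT:

```python
def mul_by_jump_decrement(x, y):
    """Model of the decrementing-JMP counter, valid for 1 <= y <= 5."""
    target = 8 + y        # address field of the JMP at line 7
    total = 0
    while True:
        target -= 1       # lines 0-2: subtract 1 from the jump's address
        total += x        # lines 3-5: add X into the running total
        if target == 8:   # the JMP now lands on line 8: HLT
            return total
        # targets 9-13 are all "JMP 0": go around again
```

Tracing `mul_by_jump_decrement(7, 3)`: the target walks 11 → 10 → 9 → 8, adding 7 three times, so it returns 21.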

It's pronounced Entscheidungsproblem…. ent-schydungs-problem…. ;)

Are you a teacher? If not, you'd sure make a great one. I learned more about CS fundamentals in your 20 minutes than I did in 4 years studying the subject at university.

I thought I read somewhere that back in the day, people who computed stuff for a living were called "computers".

As just one example, there were lots of computers based on the Z80. It doesn't have a hardware multiply instruction. Does that mean they weren't computers?

A program is built atop "If"s.

I'm only three minutes in and I've done programming with limited instruction sets (8 bit pic micro). All you need is a shift left and shift right and a bit test, then you can do multiply and divide.
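The shift-and-add approach the comment describes can be sketched in Python (a sketch of the general technique, not of any particular PIC program):

```python
def mul_shift_add(a, b):
    """Multiply using only shift-left, shift-right, bit-test, and add."""
    result = 0
    while b:
        if b & 1:         # bit test: is the multiplier's low bit set?
            result += a   # add the (shifted) multiplicand
        a <<= 1           # shift left: next power-of-two multiple of a
        b >>= 1           # shift right: move to the next multiplier bit
    return result
```

For example, `mul_shift_add(6, 7)` returns 42, using one addition per set bit of the multiplier instead of one per unit — which is why it's the standard trick on CPUs without a hardware multiply.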

“We wouldnt expect a computer to be able to compute the meaning of life”

Asimov starts sweating nervously

Excellent ♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️♥️

I think I have a program that can multiply two numbers using only the instructions that we have right now (so no conditional jumps). The idea is that we can edit the RAM, which means we can both change the storage AND the actual program itself. This means that the data can actually change how the program behaves, and with this we should be able to make a multiplication program.

This is the program that I came up with:

0: LDA 3
1: ADD 17
2: STA 3
3: JMP 31
4: LDA 3
5: SUB 17
6: STA 3
7: LDA 18
8: ADD 16
9: STA 18
10: LDI 1
11: STA 19
12: LDA 17
13: SUB 19
14: STA 17
15: JMP 0
16: X
17: Y
18: 0
19: 0
…
28: LDA 18
29: OUT
30: HLT
31: JMP 28

I actually needed 5 address bits, or 32 memory locations, to make this work, but I still only used the instructions we have available. Lines 0-6 essentially perform the conditional jump that we need for multiplication. What they do is jump to line 31 if the number stored at address 17 is 0. With this it's relatively easy to do the rest of the program, which just adds the number in position 16 to the number in position 18, and stores the sum back in position 18. At the same time, it subtracts 1 from the number in position 17, and loops back to the beginning of the program where again, it checks if position 17 is 0 and if so, goes to the last line, line 31. Then, on line 31, I just jump back a bit so I can output the answer, which is in position 18.

So, basically, positions 16 and 17 are storing the two numbers we want to multiply, and position 18 is storing the result of that multiplication.

The way the conditional jump works is that I am editing the jump statement right before I reach it. In memory, the actual jump statement will be represented as ABCD11111, where ABCD is just the code for the jump statement. Ben used 0110, so I'll also use that. The other digits are 11111 because that is the position we are jumping to, 31. Right before the program reaches the jump statement, we add the number in position 17 (which I'll call Y from now on) to the number in position 3. But the number in position 3 is the jump statement. If Y is anything other than 0, we will be editing this jump statement. The jump statement before was 011011111. If we add anything to this, we will be changing the opcode digits. As long as we add a number that fits in 5 bits, we will only change the 4th opcode bit, which means the new code will be 0111. If we just make that code do nothing, then adding anything other than 0 to the jump statement makes the line do nothing. Only if we add 0, which means Y = 0, will we actually jump to line 31, as then the jump statement won't be edited. So this works as a conditional jump statement that jumps to the last line if Y = 0 (and Y fits in 5 bits).

I'm pretty sure that this means the computer is Turing complete, even without adding a conditional jump statement, but it's just much harder to reason this out, and it's probably easier to just add a conditional jump.
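The trick in the comment above can be checked with a tiny emulator. This is a hypothetical sketch: the word format (4-bit opcode, 5-bit address, so 9-bit words over 32 cells) and the opcode values follow the comment and Ben's SAP-1 encodings where known (JMP = 0110); the unused opcode 0111 — produced when Y carries into the JMP's opcode field — is treated as a no-op, exactly as the trick requires:

```python
# SAP-1-style opcodes (assumed): LDA=0001, ADD=0010, SUB=0011,
# STA=0100, LDI=0101, JMP=0110, OUT=1110, HLT=1111
LDA, ADD, SUB, STA, LDI, JMP, OUT, HLT = 1, 2, 3, 4, 5, 6, 14, 15

def word(op, addr=0):
    return (op << 5) | addr            # pack opcode + 5-bit address

def run(mem):
    a, pc, out = 0, 0, None
    while True:
        op, addr = mem[pc] >> 5, mem[pc] & 0b11111
        pc += 1
        if   op == LDA: a = mem[addr]
        elif op == ADD: a = (a + mem[addr]) & 0x1FF
        elif op == SUB: a = (a - mem[addr]) & 0x1FF
        elif op == STA: mem[addr] = a   # self-modification happens here
        elif op == LDI: a = addr
        elif op == JMP: pc = addr
        elif op == OUT: out = a
        elif op == HLT: return out
        # anything else (e.g. the clobbered jump, opcode 0111) is a no-op

prog = {
     0: word(LDA, 3),   1: word(ADD, 17),  2: word(STA, 3),
     3: word(JMP, 31),  4: word(LDA, 3),   5: word(SUB, 17),
     6: word(STA, 3),   7: word(LDA, 18),  8: word(ADD, 16),
     9: word(STA, 18), 10: word(LDI, 1),  11: word(STA, 19),
    12: word(LDA, 17), 13: word(SUB, 19), 14: word(STA, 17),
    15: word(JMP, 0),
    16: 5, 17: 4,                       # X = 5, Y = 4
    28: word(LDA, 28 and 18), 29: word(OUT), 30: word(HLT),
    31: word(JMP, 28),
}
mem = [prog.get(i, 0) for i in range(32)]
print(run(mem))                         # → 20
```

With Y nonzero, adding Y to cell 3 turns 011011111 into an 0111xxxxx no-op, so execution falls through into the loop body; once Y has been counted down to 0, the jump survives intact and the program halts with X·Y.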

The question now is: what is the full instruction set? I cannot find it on the website.

3:37 42

喔

A computer can't compute the meaning of life, but I'm pretty sure it can output the answer to life, the universe, and everything.

Computers back in the day used a math co-processor for basic maths! Why not add a co-processor to yours?

The meaning of life must not be computed. It's 42. Look, no Computer 😉 Just kidding.

Very cool☑️

Thank you very much, Ben. You're really a genius.

You can modify the program while it is running with ADD and SUB.

Next video = Building an IBM Watson !!

Isn't the tape itself the infinite memory of a TM?

Well… Does it compute? If so, I'd probably call it a computer…

So…is P=NP then ?

Multiplication is just recursive addition. Division is recursive subtraction. Pretty simple.
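The idea in the comment above, written out as a sketch in Python (iteratively rather than recursively, to avoid blowing the stack for large operands):

```python
def mul(a, b):
    """Multiplication as repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

def div(a, b):
    """Division as repeated subtraction; returns (quotient, remainder)."""
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q, a
```

So `mul(3, 4)` is 12 and `div(17, 5)` is `(3, 2)` — which is exactly what the program listings elsewhere in this thread are doing with ADD, SUB, and a loop.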

So we can build an entire CPU by building multiple boards that each represent one CPU core??

Why not build an entire PC that can play Crysis trilogy, we can add 256GB of ram, so we can run the entire OS in a RamDisk.

I’m already making bitcoin with my SAP-1!!

keeps adding 3….

Did you know that Church was Turing's doctoral thesis advisor?

But can you tell the computer to create a task that the computer itself cannot run?

Making a computer: A Cruel Programmer's Thesis

71 thumbs down! WTF is that about?

Since our brains don't have infinite memory either, are we also technically not turing-complete computing machines?

However, a human with an infinite supply of pens and an infinite supply of paper would be, if storage is the only real obstacle there. The human just acts as the pointer and generates their own set of instructions based on received input.

Thanks for this, really helped my understanding of Turing machines and what it actually means to be a Turing complete machine.

My definition of a computer, is any mechanism that can take an input and give an output.

I am quite sure from when I was studying computer science that we were told modern computers don't have instructions for division, multiplication, and subtraction. It is all done with adding, so I at least consider yours a complete computer.