Universities and machine code

  • Thread starter Deleted member 53988

connchri

Member


Maelstorm,

Do some other universities even have binary coding lessons?
I can't comment from a Computer Science course POV. But I can say that, where I did my undergrad, we had a few lessons in assembly; I've still got the old x86 book, which I bought second hand in 2008. The ultimate aim, however, was to familiarise ourselves, along with C++, with what we would need when we went on to program Programmable Logic Controllers: debugging C code, accessing registers that represent hardware inputs/outputs, and so on. It's not something I've looked at in great detail since, but if they offered this in an Electronic and Electrical Engineering course, I would hazard a guess that it would be covered in greater detail, including byte code, in a Computer Science degree.
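For illustration only (a minimal sketch, not anything from the course): the kind of register access meant above is usually a volatile pointer to a fixed address, used for read-modify-write on individual bits. Here a plain variable stands in for the real hardware register so the example runs anywhere; on real hardware the address would come from the device datasheet.

Code:
#include <cstdint>
#include <cstdio>

std::uint32_t fake_gpio_register = 0;  // stand-in for a real memory-mapped register
volatile std::uint32_t* const GPIO_OUT = &fake_gpio_register;

int main() {
    *GPIO_OUT |= (1u << 3);    // read-modify-write: drive output line 3 high
    *GPIO_OUT &= ~(1u << 3);   // drive it low again
    *GPIO_OUT |= (1u << 7);    // drive output line 7 high
    std::printf("register value: 0x%08X\n", static_cast<unsigned>(*GPIO_OUT));
    return 0;
}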
 


Deleted member 53988

Guest


Please make up your mind if you mean binary or assembly.
Crivens,

I meant binary coding.

I wrote:

In some other universities there are even binary coding lessons?

EDIT: Machine code is binary code, hexadecimal code, octal code...
 

Crivens

Moderator
Staff member

I remember being able to read Z80 code in hex dumps, and a friend of mine wrote 6502 code directly as hex. That is where you might start. Modern CPUs have too many instructions, sometimes more than the basic vocabulary of a schoolchild... there is no way you could read that fluently within a year, and no courses exist at that level. There are plenty of 8-bit computers around to train on, and it would be fun.
 

Deleted member 53988

Guest


Terminology, this is not the same. Machine code is binary, not all binary is machine code. Hence the 'illegal instruction trap'.
Crivens,

You say that machine code is binary and at the same time say that not all binary is machine code.

This is a contradiction.
 

ralphbsz

Daemon


I've been writing software professionally for ~25 years now. Typically in groups of anywhere from 5 to 300 people.

I've seen about 0.1 cases where we actually coded in binary (meaning we wrote instructions that the CPU executed, and we didn't use assembly but emitted the numeric instructions). That was a project where the only way to get the required speed (image processing on an i386, which lacked sufficient registers) was to generate a routine on the fly and execute it, stuffing the constants and pointer offsets directly into the instruction stream. And even this was not done by actually coding whole instruction sequences in binary. Instead we wrote sample code in C++, compiled it with an assembly listing, ran the listing through awk to generate a version of the executable code that could be copied into a second C++ program as an array of integer constants (the bytes that were the instructions), then modified that array programmatically and executed it.
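To give a flavour of that technique, here is a minimal sketch of the general idea (not the actual project code): put raw instruction bytes into a buffer, mark it executable, and call it. The bytes below are the standard x86-64 encodings of mov eax, 42 and ret; this assumes a POSIX system on x86-64.

Code:
#include <sys/mman.h>
#include <cstring>
#include <cstdio>

int main() {
    // mov eax, 42 ; ret  -- a generated "routine" that just returns 42
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    // Get a page we may write now and execute later.
    void *buf = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    if (buf == MAP_FAILED) return 1;
    std::memcpy(buf, code, sizeof(code));

    // Flip the page to read+execute before jumping into it.
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)() = reinterpret_cast<int (*)()>(buf);
    std::printf("generated code returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}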

I've never heard of a case where the assembler is incapable of generating specific instructions. That's just insane. If that happens, deal with the people who wrote or sold the assembler harshly.

I've seen about 20 or 30 cases where we actually had to code in assembly. This only happens extremely rarely, for bizarre performance optimizations (like having to use vector instructions that the optimizer doesn't want to use, because we know better) or for using atomic instruction primitives. Even for those, we usually had compiler macros.
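For illustration (a hedged sketch, not our actual macros): the "compiler macros" for atomics are typically builtins or intrinsics that expand to the single atomic instruction, so no hand-written assembly is needed. With the GCC/Clang __atomic builtins, for example:

Code:
#include <cstdio>

int main() {
    int shared = 0;
    int expected = 0;

    // Compiles down to a single atomic compare-and-swap (lock cmpxchg on x86-64).
    bool ok = __atomic_compare_exchange_n(&shared, &expected, 42,
                                          /*weak=*/false,
                                          __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    std::printf("swap %s, shared = %d\n", ok ? "succeeded" : "failed", shared);
    return 0;
}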

I have no idea whether and how coding in binary (not assembly) would be taught. It may happen in EE classes, as part of processor design (which is typically a VHDL/Verilog class). It may happen for a few homework problems in a computer architecture class. Even then, it will probably not use a real-world instruction set (those are way too complex for teaching), but an old or hypothetical instruction set (like IBM 360, Intel 808x or Z80, or MIX/MMIX). In the 1980s, as part of the "operating systems" class, I had to do one or two homework problems in binary IBM 360 and Cyber 6xxx instructions, and a half dozen in IBM 360 assembly.

And if ninja_root asks another repetitive and inane question, I'll get seriously upset at him.
 

Maelstorm

Well-Known Member


Yeah, I took a computer architecture course, and the advanced one as well. For our semester project, we had to develop a pipelined 16-bit RISC CPU. The class is long over, so I'll attach the SVG file for the block diagram... or not, since the forum does not allow files with the SVG extension... But yeah, binary coding was something to see because of the signals.
 


Deleted member 53988

Guest


To end this topic:

Do octal coding and decimal coding still exist nowadays?
 

Crivens

Moderator
Staff member

No. Those are encodings. The Babylonians used base 60, if my memory serves me right. You may use any base you want, but the computer uses base 2. Only Intel uses 1.95 internally, in the Pentium FPU and P4 pipeline.
 

Maelstorm

Well-Known Member


Consider this: any program a computer runs and any data it processes is in binary, because a computer cannot use anything else. So anything that gets onto a computer must be in a binary format or the computer cannot make sense of it. Even on modern computers, temperature and voltage readings (which are analog quantities) must be converted to digital (binary) form before the computer can understand them. The characters in this post are all numbers: if ASCII is used, then A = 65. It's how computers work. Everything is a binary number, no exceptions. Since humans don't deal with binary too well, programs on the computer convert the binary data into a format that humans can easily read; I have written some of the software that does that.

You mentioned octal. Octal is an old binary-based format where the bits are arranged into groups of three. So you have the following if you use 421 binary:

Code:
000        0
001        1
010        2
011        3
100        4
101        5
110        6
111        7
Hexadecimal is basically the same thing as octal but with groups of 4 bits (a nibble) instead of three, so the coding is 8421 binary:

Code:
0000        0        1000        8
0001        1        1001        9
0010        2        1010        A (10)
0011        3        1011        B (11)
0100        4        1100        C (12)       
0101        5        1101        D (13)
0110        6        1110        E (14)
0111        7        1111        F (15)
To get the decimal value, you basically add up the position values wherever there is a 1 bit, and that's the number. So 1010 has 1s in bit positions 2 and 4 (counting from the right, starting at 1), which correspond to the values 2 and 8, so 2 + 8 = 10, which is A in hex.
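A quick sketch (illustrative only) that prints the same value in binary, octal, decimal and hexadecimal, to show that these are just different ways of writing one and the same bit pattern:

Code:
#include <bitset>
#include <iostream>

int main() {
    unsigned value = 0b1010;  // the 1010 example from above

    std::cout << "binary : " << std::bitset<4>(value) << '\n'                  // 1010
              << "octal  : " << std::oct << value << '\n'                      // 12
              << "decimal: " << std::dec << value << '\n'                      // 10
              << "hex    : " << std::hex << std::uppercase << value << '\n';   // A
    return 0;
}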


Note: Analog signals are continuously varying voltages/currents. Digital signals are fixed at either Vcc or Gnd, i.e. voltage or no voltage with respect to ground. Devices known as ADCs (Analog-to-Digital Converters) convert a continuously varying analog signal into a digital number of so many bits. A 12-bit ADC has 4096 steps between 0 and whatever Vcc is, so the ADC reports the number associated with whichever step the input voltage is closest to.
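As a rough sketch of that last point (the 3.3 V reference below is an assumed value, not anything specific): converting a raw 12-bit ADC count back to a voltage is just scaling by the step size.

Code:
#include <cstdio>

int main() {
    const double vref = 3.3;      // assumed full-scale reference voltage
    const unsigned steps = 4096;  // 2^12 discrete steps for a 12-bit ADC
    const unsigned raw = 2048;    // example raw reading, roughly half scale

    double volts = raw * vref / steps;  // each step is vref/4096 volts
    std::printf("raw %u -> %.4f V\n", raw, volts);
    return 0;
}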
 

ralphbsz

Daemon


No. Those are encodings. The Babylonians used base 60, if my memory serves me right. You may use any base you want, but the computer uses base 2. Only Intel uses 1.95 internally, in the Pentium FPU and P4 pipeline.
At times like this, it would be good if the forum allowed one not just to "like" a post with a thumbs-up, but also to give it a laughing smiley.

On a serious note: a former colleague of mine advocated that computers should stop working in binary and instead use trinary (ternary), where each digit, a "trit", can store or process the values 0, 1 and 2. From a hardware point of view this is doable but extremely hard: every capacitor (memory cell) and transistor (gate, switch) would need to handle three voltage levels; flash memory has demonstrated that it can be done, although the circuit elements would become a little larger. His argument, however, was not about electrical or space efficiency but purely theoretical, and involves computer arithmetic: in number theory there are lots of theorems that hold for odd primes (3, 5, 7, ...), and using one of those as the base of the number system would allow a lot of cross-checking and correctness proving in the arithmetic operations.

Sadly, he is a former colleague because he lost his two battles with cancer. Fortunately, he had a pleasant life (with friends, wine, ...) until near the end.
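Purely for illustration (nothing to do with any real hardware): printing an ordinary integer as base-3 digits shows what such "trits" would look like.

Code:
#include <algorithm>
#include <iostream>
#include <string>

// Convert an unsigned integer to its plain (unbalanced) base-3 digit string.
std::string to_base3(unsigned n) {
    if (n == 0) return "0";
    std::string digits;
    while (n > 0) {
        digits.push_back('0' + n % 3);  // least-significant trit first
        n /= 3;
    }
    std::reverse(digits.begin(), digits.end());
    return digits;
}

int main() {
    for (unsigned n : {5u, 10u, 42u})
        std::cout << n << " in base 3 is " << to_base3(n) << '\n';
    return 0;
}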
 

Crivens

Moderator
Staff member

ralphbsz
Did you know that this FDIV thing was completely pointless? It was done to shrink the die and increase yield, but it failed because the die area was set by the I/O drivers along the border. The outer border of the US would not change if, say, some aliens somehow stole Nevada. So no gain, and a PR disaster.

When you are around here again let me know so I can invite you to a nice brew or so.
 