
What is Machine, Assembly and High label Languages?

Discussion in 'Programming' started by Atik Hasan, Jun 29, 2015.

  1. #1
    I need definitions of the three types of programming languages, with examples:

    1) Machine label languages.
    2) Assembly label Languages.
    3) High label Languages.

    Can someone please explain these terms?
     
    Atik Hasan, Jun 29, 2015 IP
  2. #2
    You can Google them; change "label" to "level":

    1) Machine level languages.
    2) Assembly level Languages.
    3) High level Languages.

    Machine language is usually the lowest level of programming; the difficulty is really high, and if you were to create something in it, it would probably take a lot of time compared to a high level language. http://www.webopedia.com/TERM/M/machine_language.html

    Assembly is a step up, but still really difficult: https://en.wikipedia.org/wiki/Assembly_language

    High level languages (https://en.wikipedia.org/wiki/High-level_programming_language) are the more common programming languages on the forum, C# or Python for example. These high level languages usually compile into lower level languages or machine code. Think of high level languages as someone basically making functions out of lower level languages, where each function can perform multiple things as one operation. The programming and understanding will usually be easier in high level languages, and development is usually faster; however, high level languages can sometimes produce slower code, or code with a lot of extra baggage that is not needed, which is why you may use lower level languages if you are trying to make something as efficient as possible, perhaps due to hardware limitations. That's not to say it is impossible to make something just as efficient in a high level language, but it can vary. High level languages may also take care of things like garbage collection automatically.
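
    To make that concrete, here's a minimal sketch: a trivial C function, plus a comment showing the kind of x86-64 assembly an optimizing compiler typically emits for it. The exact output varies by compiler and flags, so treat the assembly as illustrative, not definitive.

        /* a trivial high level function */
        int square(int x) {
            return x * x;
        }

        /* what an optimizing compiler (e.g. gcc -O2 -S) typically turns it
         * into on x86-64:
         *     square:
         *         imul edi, edi   ; x * x (first integer argument is in EDI)
         *         mov  eax, edi   ; the return value goes back in EAX
         *         ret
         */
    Code (markup):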
     
    Anveto, Jun 29, 2015 IP
  3. #3
    homework?
     
    sarahk, Jun 29, 2015 IP
  4. #4
    1) Machine level languages are the languages understood directly by the computer.
    2) Assembly language implements a symbolic representation of the machine code needed to program a given CPU architecture.
    3) A high-level language is a computer programming language that isn't tied to one computer's hardware, is designed around the job to be done rather than the machine, and is easier to understand.
     
    zinist, Jun 30, 2015 IP
  5. #5
    For the most part once you change "label" to "level" as others have, the answers given so far are accurate, but don't quite paint the full picture. This is likely because while most of the above posters know the words, they've never actually dealt with it. Generally speaking you don't use "level" with the first two -- and you are missing one "more recent" level distinction in use today that I still have difficulty with (despite being in common use for some 25 years). As such, I list it as Machine Language, Assembler Language, Low Level Languages, and then High Level Languages... and I would include what some are now calling "intermediate level" as the lines are being blurred.

    As someone who has hand assembled his own machine language and entered it one bit at a time on toggle switches, lemme paint a broader picture for you.

    Machine language (you don't say "level" for this)

    This is the native binary code that a processor understands. Not all processors have the same native machine language code -- code written for a Motorola 6502 won't run on an Intel 8088 without a complete rewrite -- code written for native operation on an ARM processor won't run on an i7. Machine language is specific to the hardware it is written for. Even when two processors have the same commands, the actual physical binary code to call that command is different.

    Even the order in which data is stored can be different; Google "endianness" to see which byte in a set of data is treated as "most significant" or "least significant", and the headache that just dealing with Motorola legacy vs. Intel legacy processing can bring into things... You can end up with the 16 bit word 0x0100 being stored in memory as 0x00 0x01 on a "little endian" processor and as 0x01 0x00 on a "big endian" one. One little, two little, three little endian, four little endian bytes...
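
    If you want to see which side of that fence your own machine sits on, here's a minimal C sketch of the exact 0x0100 example above -- store the 16 bit word, then inspect the individual bytes through a char pointer:

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint16_t word = 0x0100;  /* the 16 bit word from the example above */
            unsigned char *bytes = (unsigned char *)&word;

            if (bytes[0] == 0x00)    /* low byte stored first */
                printf("little endian: 0x%02X 0x%02X\n", bytes[0], bytes[1]);
            else                     /* high byte stored first */
                printf("big endian: 0x%02X 0x%02X\n", bytes[0], bytes[1]);
            return 0;
        }
    Code (markup):
    On anything with Intel legacy (so, most desktops) that prints "little endian: 0x00 0x01".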

    It gets even more confusing as not all processors even use the same bit-width for commands or data.

    "complex instruction set computing" or "CISC" processors typically have operations that can range in bit-width from 8 to as much as 64 bits. This is more efficient for caching or moving commands from memory, and allows for a wider-range of hardware implemented commands and the simpler writing of machine language (and by extension assembler) -- but does so at the cost of more electricity and more transistors inside the processor.

    "reduced instruction set computing" or "RISC" processors generally have a fixed bit-width for commands, this "orthagonal" processor design reduces the number of transistors needed and is then typically more power efficient per clock, but can be very inefficient at memory access to the code (since processor execution ends up lock-in-step with memory), harder to cache (you can fit less code in the same amount of cache), and thanks to the "reduced instruction" part usually means the developer has to write more code to do the same job.

    The laugh being BOTH actually won the processor format war in Intel-land. Modern Intel and AMD x86 / AMD64 legacy processors are typically CISC on the outside, with a translation matrix to run the CISC code on an internal RISC design; either by direct translation or the use of "microcode" -- smaller machine language code for the CISC processor called by the RISC interpreter.

    Really the difference between them comes down to an old joke -- "RISC is designed for people who write programs. CISC is designed for people who write compilers".

    Gets even more fun if you go back in time and start dealing with processors where a byte is a different bit-width; see many DEC processors that were 6 bits per byte, or operated on triples of nybbles. (A nybble is 4 bits, aka half a "modern" byte)

    Assembly

    Isn't actually a different language -- it's STILL machine language (for the most part); it's just that instead of writing each operation in binary you use a "mnemonic" -- a short, easy to remember word or abbreviation -- to represent that binary opcode. Assembler does add some things like "labels", which create memory offsets you can easily refer to instead of having to manually calculate a point in memory ahead of time, and more advanced assemblers add "macros" to let you repeat or generate code that, well... repeats or is used a lot. But at its core it's just a human readable version of the binary machine language code.

    For example, let's take something simple like just blanking the screen in 80x25 plaintext CGA/EGA/VGA mode on an 8088 (aka 16 bit "real mode" on 80286/newer processors), and write it up as both its assembly source and its machine language hex values. A semi-colon in most assemblers means "comment to end of line" -- I'll comment this too for clarity, and use hexadecimal for each byte, since binary can get a bit... hard to follow.

    Processors have "registers" -- depending on the processor family and operation mode there can be as few as four, or as many as 64 or more. On Intel hardware certain registers serve certain purposes, which simplifies certain operations in the silicon, and IMHO simplifies code legibility -- people who favor the Motorola approach... well, it's like the Apple vs. PC thing. You have fans of each and they rarely see eye-to-eye.

    I'll try to document this as best I can without getting too complex.

    	mov  ax, 0xB800 ; the segment of memory video is at
    	mov  es, ax     ; you cannot set es with a direct value,
    	                ; have to use another register
    	xor  di, di     ; cheap trick to zero a register is to xor it by itself
    	mov  cx, 0x07D0 ; there are 2000 words
    	mov  ax, 0x0720 ; 0x07 is light grey on black, 0x20 is the space character
    	rep  stosw      ; repeat CX times store AX at ES:[DI], add 2 to DI
    Code (markup):
    Basically this points ES:[DI] (a segment and a pointer into that segment) at video memory, and writes 2000 words of attribute/character pairs to that memory location. CX is the 'counter' that operations like REP or LOOP use to say how many times to do something. STOSW is a "string write" command that basically says write a word (STOSB does bytes) and increment the index pointer.

    "assembled" into 8088 machine language stated in hexadecimal that's:
    B8 00 B8 8E C0 31 FF B9 D0 07 B8 20 07 F3 AB

    Lemme comment that so you can see the relationship of bytes to opcodes:
    B8 00 B8 ; mov  ax, 0xB800
    8E C0    ; mov  es, ax
    31 FF    ; xor  di, di
    B9 D0 07 ; mov  cx, 0x07D0
    B8 20 07 ; mov  ax, 0x0720
    F3 AB    ; rep  stosw
    Code (markup):
    Quite literally, assembly is just a 1:1 replacement of bytes with text mnemonics. It is NOT actually a different "language" -- it's called assembly because you either have to look up every opcode you use by hand to "hand assemble" it, or use a program called an "assembler" to do that for you.
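
    If that 1:1 relationship sounds too simple to be true, here's a toy sketch in C of what an assembler does at its core: a table lookup from mnemonic to opcode bytes. A real assembler also parses operands, resolves labels and so on -- this table is hardwired to the exact six lines above, purely for illustration.

        #include <stdio.h>
        #include <stddef.h>

        /* each source line maps to its opcode byte(s) */
        struct op { const char *line; unsigned char bytes[3]; int len; };

        static const struct op table[] = {
            { "mov ax, 0xB800", { 0xB8, 0x00, 0xB8 }, 3 },
            { "mov es, ax",     { 0x8E, 0xC0 },       2 },
            { "xor di, di",     { 0x31, 0xFF },       2 },
            { "mov cx, 0x07D0", { 0xB9, 0xD0, 0x07 }, 3 },
            { "mov ax, 0x0720", { 0xB8, 0x20, 0x07 }, 3 },
            { "rep stosw",      { 0xF3, 0xAB },       2 },
        };

        int main(void) {
            /* "assemble" the program: emit the bytes for each line in order */
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                for (int j = 0; j < table[i].len; j++)
                    printf("%02X ", table[i].bytes[j]);
            printf("\n");  /* B8 00 B8 8E C0 31 FF B9 D0 07 B8 20 07 F3 AB */
            return 0;
        }
    Code (markup):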

    High Level Languages

    It used to be that anything that wasn't machine language or assembler was called "high level", and "low level" -- when used at all -- referred to machine language and/or ASM. But that changed about thirty or so years ago as compilers became more and more commonplace and certain programming languages were given more and more access to the hardware on which they were running. I still get strange looks for calling C a "high level language" just because I still think in terms of the old definition.

    High level languages operate on the principle of a codebase that can be run on any processor by way of an intermediate piece of software that interprets it into native code, or uses native code to "interpret" it in realtime.

    Interpreters are the most common form of high level language, be it interpreting the text in realtime, or compiling that language to an intermediate 'bytecode' that is distributed instead. Most ROM BASICs used an intermediate bytecode that represented the language commands, just so they could use array lookups for each command's offset in memory, and so that programs written in BASIC could be stored in less memory. Other interpreted languages use an intermediate bytecode representing smaller operations that can be strung together, operations that correspond to native commands on many (but not all) possible processor targets. P-Code, which Pascal was originally created to use, was bytecode for a fictional processor target that the interpreter would turn into the appropriate native operations -- and it was the inspiration for a great many future languages, including modern ones like Java and .NET. Many use the term "virtual machine" for those newer versions of the same thing; I have a certain distaste for that as it reeks of "new term for something we had for twenty years before anyone ever called it that" -- but 'tis popular, since it's easier to say than "bytecode interpreter with a just in time compiler".

    With an interpreted language, so long as there is an "interpreter" for the language on the processor target, you can run the program. It meant the application could be written once; you just needed to write a new 'interpreter' for each processor that came along.
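
    To make "bytecode interpreter" concrete, here's a minimal C sketch of a made-up stack machine -- the same idea behind P-Code, the JVM and .NET, shrunk down to four opcodes. The opcode numbers and the little program are invented purely for illustration:

        #include <stdio.h>

        /* the "instruction set" of our fictional processor target */
        enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

        int main(void) {
            /* the distributed "program": push 2, push 3, add, print */
            int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };

            int stack[16], sp = 0, pc = 0;
            for (;;) {                       /* the fetch/decode/execute loop */
                switch (program[pc++]) {
                case OP_PUSH:  stack[sp++] = program[pc++]; break;
                case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
                case OP_HALT:  return 0;
                }
            }
        }
    Code (markup):
    The int array is the "program" you'd distribute; port that little loop to a new processor and every program written in the bytecode runs there too -- which is the whole point.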

    Low Level Languages

    Generally speaking, compiled languages are those like C that have their code turned into native machine language. They act as a universal 'intermediary' for all processor targets. People started calling them low level when it became practical to start writing things like operating systems and device drivers in them. Much as how with an interpreted language all you need to move between processors is an interpreter, with a compiled language all you need for a new processor target is a new compiler.

    Unlike interpreted languages which distribute the source code as the program, or an intermediate bytecode, compiled / low level languages distribute a machine language binary as the program. As such if you try to run that compiled binary on a processor it isn't designed for, it's not going to work.

    MOST programs distributed as binary executables are created for the "lowest common denominator" processor in a "family". Many times existing processor families like x86, AMD64, PPC or ARM have features or extensions added in newer processors -- if you compile the program to use those extensions it won't work on older processors, so few programs are distributed in binary form that truly use the 'full capacity' of the latest and greatest processors. This is actually what leads to systems like Gentoo, where the entire OS and all the software ships as source, and is compiled for the EXACT processor you are going to run it on using any and all available processor optimizations, instead of a more generic legacy style.
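
    There's a middle ground worth mentioning: a generic "family" binary can test for extensions at run-time and dispatch to optimized routines only when they exist. Here's a minimal sketch using GCC/Clang's x86-only __builtin_cpu_supports; AVX2 is just an example of an extension you might key off of:

        #include <stdio.h>

        int main(void) {
            __builtin_cpu_init();  /* populate the CPU feature flags */
            if (__builtin_cpu_supports("avx2"))
                puts("AVX2 present: dispatch to the AVX2-optimized routine");
            else
                puts("baseline x86-64: use the generic fallback routine");
            return 0;
        }
    Code (markup):
    Gentoo's approach skips the run-time check entirely: compile everything with something like gcc -march=native and the binary simply assumes your exact processor.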

    Intermediate Level

    This is a term I've heard being thrown about to explain how a number of bytecode interpreters are now supplemented or even supplanted by "Just in Time" compilers. JIT compilation is just what it sounds like: the program is distributed as source or an intermediate bytecode just like an interpreted language, but is compiled to machine language AT or even DURING run-time. This increases the overhead and can make the startup of software slower, but once up and running the program can be as fast as a compiled language -- which by extension can make it faster than a generic compiled executable, since processor specific extensions you couldn't put in a 'generic' family binary can be added to the code.
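
    The core trick of a JIT is surprisingly small: put machine language bytes into memory, mark that memory executable, and jump to it. Here's a minimal sketch in C, assuming Linux/x86-64 and POSIX mmap -- and note the same 0xB8 "mov accumulator, immediate" opcode from the 8088 example above:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            /* x86-64 machine language for:  mov eax, 42 ; ret */
            unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

            /* get a page of memory we're allowed to execute */
            void *mem = mmap(NULL, sizeof code,
                             PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (mem == MAP_FAILED) return 1;
            memcpy(mem, code, sizeof code);

            /* point a function pointer at the bytes and call them */
            int (*fn)(void) = (int (*)(void))mem;
            printf("%d\n", fn());  /* prints 42 */
            return 0;
        }
    Code (markup):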

    To sum up:

    High Level - Interpreted languages are slow because they are abstractions running atop abstractions, and quite often restrict what you can do for hardware access. Their biggest advantage is portability and ease of use for the programmer, but they can also bring a level of security by restricting what the programmer can and cannot do... NOT ALWAYS true though, since some languages (like PHP) are "insecure by design" in the name of trying to make them easier to use. Biggest problem is you MUST have the interpreter installed on the system to run the program... it's a program in a program.

    Low Level - Compiled languages are turned into machine language, but they can NEVER be as efficient as well written processor specific machine language; even the best optimizing compiler cannot make all the proper decisions, as the languages themselves are an abstraction. Again, the advantage is portability between processor targets, but since the software is usually distributed as machine language binaries, you need the source for it to be portable. Because they can compile to a native executable, they are "standalone" and you don't need anything else to run that executable.

    Intermediate Level - Tries to combine the best of both, but sadly brings the worst of both to the table as well. They can result in better execution times than compiled languages, but they can also have disastrously bad startup times and significantly larger distributions, and like interpreted languages they require that their interpreter, virtual machine, JIT compiler or whatever else you want to call it be installed.

    Machine Language / Assembly -- same thing, just Assembly is a text representation of the binary code with some extra things like labels and macros thrown in to make it easier to create. An "assembler" is just a compiler that translates that text representation into the native code.

    Anyone telling you a compiled, interpreted or VM can be faster than optimized processor specific native code is full of ****, and knows **** about ****. BUT, processor specific native machine language is very much a rarity as a generic "family" code is the norm.

    "back in the day" -- and even for some projects on lesser hardware today, a common approach is to mix the techniques; many libraries for languages like C have machine language for optimizing routines like string manipulation, disk reading or memory moves on certain processor targets. Programmers will use machine language to optimize "inside the loop" where software NEEDS to be fast as possible, but then use a high level language to "glue" that machine language together.

    I actually use that approach in my retrocomputing projects where I mix Pascal or C with assembler.
    http://www.deathshadow.com/pakuPaku

    Which was the only way to make a 16 color CGA Pac-Man ripoff that would run on an original IBM 5150 at the full flat-out rate. I'm further optimizing (ok, I tossed the codebase and started over from scratch) that machine language so that a 128k PCJr (which is roughly 2/3rds the speed of a regular IBM PC despite both running at 4.77MHz -- it's a memory speed issue) will be able to run the game at full speed. If not for several prolonged hospital stays, version 2.0 would have come out by now!

    One final word -- the lines are being blurred even further, as some of the "bytecode" used in interpreters is starting to have processors or processor extensions created that will run that bytecode AS MACHINE LANGUAGE... directly, as in the arbitrary intermediate bytecode of an interpreted language was used as the template for a native machine language. Jazelle DBX for ARM processors, for example, is hardware designed to directly run Java bytecode without an interpreter or the need for runtime compilation. DBX (direct bytecode execution) is an interesting development with some very interesting possibilities moving forward. Instead of machine language being the result of silicon efficiency or power consumption design, we're seeing a new generation where the programming language or its intermediate bytecode is the starting point. This is going to bring the ease of development of high level languages together with the speed and efficiency of machine language -- at least on processors that support it.

    We're also seeing languages compiling to other languages' bytecode; FPC (Free Pascal) for example can now compile to the JVM... a laughable state of affairs, since the JVM was inspired by the P-Code that drove the original Pascal flavors. Some languages that were created to be interpreted or bytecoded also have compilers -- one of the first to gain a compiler was Pascal, as bytecode was slow and inefficient -- one of the most influential Pascal compilers being Borland's Turbo Pascal, which continues to this day under the name "Delphi". Pascal, a "learning language" many people scoff at, is alive and well, with everything from Skype to WinRAR having been created in it!

    Basically, the various terms of "levels" are starting to lose their meaning as the languages themselves can cross those lines.

    I know that's probably more than you expected in an answer, but hey, you asked. As I often say the TLDR Twitter generation mouth-breathers can piss right off.
     
    deathshadow, Jun 30, 2015 IP