
> 3. What this means is, you can map almost any command in C into a specific command or a few specific commands in assembly.

Is that really true? It depends very much on your compiler and on the optimizations that it uses. I'd say that the level of optimization performed is proportional to the similarity between input and output. Straightforward translation makes for crappy optimization.



The important point is that a C command will compile to a deterministic number of actual machine instructions.

For example, C doesn't have a "raise to the nth power" operator (in Python, 2**10 means 2 to the power of 10, for example), because it can't be done with one assembly instruction, only with a loop.

Even with optimizations, you can reason about your code as if every C command were one assembly instruction, and you won't be far off (you can say that every C command is O(1) assembly instructions). This is very different from other languages.


Not sure I buy that.

C has a division operator, despite the fact that plenty of CPUs (ARM, for a common example) don't have a divide instruction. Typically, the compiler generates a call to a runtime support library (e.g. libgcc) with division functions for various datatypes, and yes, those functions typically involve some looping.
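As a sketch of what such a runtime helper does (a conceptual shift-and-subtract illustration, not the actual libgcc source; the name soft_udiv is made up), unsigned division without a divide instruction looks roughly like:

```c
#include <stdint.h>

/* Restoring (shift-and-subtract) division: one loop iteration
   per bit of the dividend. Behavior is undefined for den == 0,
   just as with the C '/' operator. */
uint32_t soft_udiv(uint32_t num, uint32_t den) {
    uint32_t quot = 0, rem = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((num >> i) & 1);  /* bring down next bit */
        if (rem >= den) {
            rem -= den;
            quot |= 1u << i;                  /* set quotient bit */
        }
    }
    return quot;
}
```

Note the loop runs a fixed 32 iterations regardless of the operands, which is why it is still constant-time work even though it isn't a single instruction.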


That's true, and it's also a bit of a nitpick. How many of us actually work on multiple architectures or with different C compilers? For most systems-level stuff, you are simply using gcc on x86 linux, with plain-vanilla glibc, so honestly, I know what that div op is going to map to. If you've been at this for a while, you know what to expect when you write a simple for loop vs some pointer arithmetic. And if you happen to be an ARM guy, you probably are familiar with all the quirks of your platform as well.


I agree that experience will develop an intuition for how C statements map to assembly. But experience is no replacement for standards -- no single implementation defines the language. In most cases, it's more useful to reason about the guarantees within the language rather than inferring what the compiled assembly will be. Possible? Yes, if, as you said, development is on homogeneous platforms. Painful? If you need to know how 5 compilers act instead of 1, I suspect so.


The point is not to know the actual assembly. The point is that you'll never run an operation that might take a non-deterministic amount of time to run.

Contrast to, say, Python (as an obviously exaggerated counterexample). Just calling a function in Python means performing a lookup in a hash table.


Dividing one 32-bit number by another is a constant-time operation even if on some machines (e.g. ARM) it is implemented as a loop. This is what matters. Raising a number to the nth power is an O(log n) operation.
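The O(log n) claim is the standard exponentiation-by-squaring algorithm; a minimal sketch (the name ipow is hypothetical, and this ignores overflow):

```c
#include <stdint.h>

/* Exponentiation by squaring: roughly log2(exp) loop iterations,
   i.e. O(log n) multiplications rather than n of them. */
uint64_t ipow(uint64_t base, unsigned exp) {
    uint64_t result = 1;
    while (exp) {
        if (exp & 1)          /* odd exponent: fold in one factor */
            result *= base;
        base *= base;         /* square the base */
        exp >>= 1;            /* halve the exponent */
    }
    return result;
}
```

Unlike the fixed-width division loop, the number of iterations here depends on the exponent, which is why exponentiation can't be bounded by a constant the way a 32-by-32 divide can.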


Division takes a data-dependent number of cycles on all x86-64 processors.


But the divide instruction is still O(1).


To decode? Why does the user care how long it takes to decode, they care how long it takes to run. We can provide upper bounds for exp, sin, and erfc too.



