Other count digits recursive

For a Java-only version I must dig a bit deeper, because there is a multitude of libraries/classes.
The output of an operator in one library cannot always be used as input for a function in another library, etc.
I beg to differ. Most languages support a type-conversion API.
For example:
Java:
String a = "1";
String b = "2";
System.out.println(a + b);                                     // will output "12"
System.out.println(Integer.parseInt(a) + Integer.parseInt(b)); // will output "3"
System.out.println(Integer.parseInt(a + b));                   // will output "12". Figuring out why not 3 is left as an exercise for the reader.
Java is also aware of scope (class-level fields vs. local variables).
 
Of course. Nothing wrong with Google, nor with Ctrl-C, Ctrl-V. I'm as much the owner of the code as I am of the Clojure compiler itself.
You can't expect people to link all sources when the sources are the first results of the most basic Google search, contain merely 5 lines of code, and are not a new invention.
What if an editor on Wikipedia took the code from a book?
Note: when I paste larger pieces of code & text with new ideas, I always cite the authors as a form of courtesy.
Something more interesting I've read: the JVM is not capable of detecting possible tail-call optimisations by itself.
 
No, you're wrong. Where did I claim to have written something myself?
Look at this page.
Where on that page do you see the editor of the page referencing the books he read in order to write the article?
Where on that page does the author claim to be the inventor of the code written?
Please also note that Wikipedia has a more "permanent" character compared to a "temporal" forum.

Here is code I've written, here is code I've copied; one could be clearer, sure, but only a lawyer would fall for it.
 
Not to hijack the thread, but I have a question for software engineers -- which I'm not. I'm wondering which one is more efficient (Big O) in C:
counting digits using recursion, or using the common logarithm (base 10) library function.
 
How is a logarithm implemented?
It could be f(x) = a + bx + cx² + dx³, and then you have to do the additions and multiplications. O(n·log(n))? Probably the least efficient.
I would go for something like a constant-stack-size, tail-optimised recursion (with an accumulator). O(n)

PS1: When numbers are stored in binary format you can do some form of binary search for the first "one-bit".
PS2: When numbers are stored in "exponential/double/float format", the time is clearly O(1).
 
I think that what Jose is quibbling about is a bit of a gray area... there's enough code to demonstrate an idea from the language, to demonstrate an understanding of the concepts -- but not enough to do anything particularly useful on its own. I sometimes post code snippets to show an understanding of an API -- there are lots of places that code could have come from. I never claim to have invented an idea. I sometimes do quote where I got it from, if there's a debate on the correct way to do something.
 
How is a logarithm implemented?
It could be f(x) = a + bx + cx² + dx³, and then you have to do the additions and multiplications. O(n·log(n))? Probably the least efficient.
I would go for something like a constant-stack-size, tail-optimised recursion (with an accumulator). O(n)

PS1: When numbers are stored in binary format you can do some form of binary search for the first "one-bit".
PS2: When numbers are stored in "exponential/double/float format", the time is clearly O(1).
Please don't confuse Big-O complexities with actual logarithmic functions. A Big-O complexity merely fits a curve that can be described by a logarithmic equation. But actually implementing the math formula that is a logarithmic equation... 🤦‍♂️ a logarithm is the inverse of the exponential function... 😩 so your example is mathematically incorrect.
 
Logarithms are used a lot in big-O notation and in the time complexity of algorithms (e.g. sorting).
O(n^n) > O(n!) > O(e^n) > O(n²) > O(n·ln(n)) > O(n) > O(ln(n)) > O(1)
Here we want to apply big-O notation to the calculation of the logarithmic function itself.

For a number stored in C/IEEE double format you have:
x = mantissa × 2^exponent.
To calculate the common log of it: y = log10(x) = log10(mantissa) + exponent · log10(2).
log10(mantissa) and the exponent can each be obtained in constant time, i.e. O(1).

Log is calculated using approximating polynomials. This involves multiplication. [Exponentiating the result back must give an acceptable error.]
For the fixed-length precision of numbers stored in C/IEEE format, log can be calculated in constant time, O(1).
For numbers stored in other (variable-length) formats, it's done somewhere between O(n·log(n)) at best and O(n²):
O(n²): time to multiply two n-digit numbers the "simple way"
O(n·log(n)): time for an FFT-based multiplication, or for a binary search (e.g. checking whether the exponential of a new approximation of the log is closer to the original)
 
For numbers stored in other (variable-length) formats, it's done somewhere between O(n·log(n)) and O(n²):
O(n²): time to multiply two n-digit numbers the "simple way"
O(n·log(n)): time for an FFT.
Just for clarification:
Those logs are all in base 10, not base e (natural logarithm). I made a mistake in my original question: I meant to say common logarithm (base 10), not natural (base e).
I corrected that.
 
Just for clarification:
Those logs are all in base 10, not base e (natural logarithm). I made a mistake in my original question: I meant to say common logarithm (base 10), not natural (base e).
I corrected that.
I'd think recursion would be more efficient; C has a useful API for that. If you get fancy and use log equations (doesn't matter which ones), that's extra processor cycles every step of the way. Converting between a logarithmic representation of a number and an integer representation of the same number -- that's extra computation that is unnecessary in most scenarios. 😩
 
One thing that always trips me up: when CS types say "log" they don't mean natural log (i.e., base e ≈ 2.718) or base-10 log. They mean base-2 log. This is common in big-O analysis. So O(n·log(n)) means n times whatever power you have to raise 2 to in order to get n.
 
Wrong: O(log10(n)) == O(ln(n)), because in the limit as n goes to infinity the ratio remains constant and does not go to zero or infinity.
ln is used for base e
log is used for base 10
log base 2 is hardly used except in specific contexts: the precision of floating point... CS...
 
Wrong: O(log10(n)) == O(ln(n)), because in the limit as n goes to infinity the ratio remains constant and does not go to zero or infinity.
The ratio of what?
ln is used for base e
log is used for base 10
High school Algebra.
log base 2 is hardly used except in specific contexts. The precision of floating point ...
Right. Specific contexts like "Computer Science".
 
One thing that always trips me up: when CS types say "log" they don't mean natural log (i.e., base e ≈ 2.718) or base-10 log. They mean base-2 log. This is common in big-O analysis. So O(n·log(n)) means n times whatever power you have to raise 2 to in order to get n.
I would have to dig out my college-era textbook to dispute that. The point of 'log n' in big-O analysis is to denote a class of computational complexity. n·log(n) has roughly the same shape as n·ln(n) or any other base's version -- the logarithmic factor just grows VERY slowly as n approaches infinity.
 
Addendum.
When f(x) is the complexity/time of a procedure depending on a value x,
and g(x) is the complexity/time of another procedure depending on a value x,
then O(f(x)) and O(g(x)) only differ if
lim x→∞ f(x)/g(x) is zero or infinity.
If lim x→∞ f(x)/g(x) is a finite nonzero constant, then O(f(x)) and O(g(x)) are considered equal.
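For logarithms of any two bases a, b > 1, the change-of-base identity makes that limit explicit (a quick derivation):

\lim_{x\to\infty} \frac{\log_a x}{\log_b x}
  = \lim_{x\to\infty} \frac{\ln x / \ln a}{\ln x / \ln b}
  = \frac{\ln b}{\ln a}

which is a finite nonzero constant, so O(log_a x) = O(log_b x) regardless of base.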
 
Ah, I've heard that as "you ignore constant factors in big-O notation". Since log2(x) = log10(x) / log10(2), which is just 3.3219 · log10(x), the two expressions differ only by a constant factor -- which is exactly what big-O ignores.

It's still easier to do big-O analysis if you remember logs are base 2 in CS.
 
One thing that always trips me up: when CS types say "log" they don't mean natural log (i.e., base e ≈ 2.718) or base-10 log
Interestingly enough, whenever I think of logarithms, base e comes to mind, and when I type (not write) it, I mix them up and have to go back and fix my error!
 
Interestingly enough, whenever I think of logarithms, base e comes to mind, and when I type (not write) it, I mix them up and have to go back and fix my error!
I guess I'm just old. In school "log" meant base-10 log, when they were still useful in the time before calculators. Natural log was always "ln". No other bases were explicitly taught.
 