Alternative to elif for many cases using value comparison

From chapter 3, exercise 3 of the ‘Py4e’ book:

try:
    inp = float(input("Enter score: "))
except ValueError:
    print('Bad score')
    quit()
if not 0 < inp < 1:
    print('numbers between 0 and 1 only please')
    quit()
elif inp >= 0.9:
    grade = 'A'
elif inp >= 0.8:
    grade = 'B'
elif inp >= 0.7:
    grade = 'C'
elif inp >= 0.6:
    grade = 'D'
else:
    grade = 'F'
print(grade)

There must be a more elegant way of testing than all those elif statements. What if there were 1000 conditions of the type ‘inp >= n’ to test for? The processor load would be huge. If the test cases were of the form ‘elif inp == n’, then a dictionary would do, but it’s testing for >=.
Does Python have a way to do this, or would I need pandas DataFrames?

Not really? I’m not sure why you think there would be an extra ‘processor load’ for that.

In any case, I’m not aware of an alternative, but I’m also not aware of a use case for this. There aren’t a ton of places where you need to write a 1000 case chained if-else, especially not in Python.

I’m not sure of the term, but I think it’s called ‘quantizing’; i.e. looking at a value and working out what range of values it fits within, then assigning an appropriate identifier to it. I wrote something like that years ago in VB.NET, to assign a musical note value to an incoming musical pitch. There were about 70-plus notes to look up. Can’t find the code, can’t remember how I did it, it was that long ago!
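
In Python it would be something like this (just a rough sketch of the idea from memory, not my original VB code; it maps a frequency in Hz to the nearest MIDI note number, with A4 = 440 Hz and 12 notes per octave):

import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def nearest_note(freq_hz):
    # MIDI note 69 is A4 (440 Hz); each semitone is a factor of 2 ** (1 / 12)
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    octave = midi // 12 - 1
    return f"{NOTE_NAMES[midi % 12]}{octave}"

print(nearest_note(261.6))   # middle C -> 'C4'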

I’m assuming the processor load would be high in comparison to, say, looking the value up in a lookup table or an Excel file using the same comparative test.

I’m considering writing something similar using Python, but looking at RGB values of colours, and breaking those down into maybe 200-300 categories.
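
For the colours, I’m picturing something along these lines (only a sketch with a made-up three-entry palette; the real table would have a couple of hundred named entries):

def nearest_colour(rgb, palette):
    # pick the palette entry with the smallest squared distance in RGB space
    def dist2(name):
        r, g, b = palette[name]
        return (rgb[0] - r) ** 2 + (rgb[1] - g) ** 2 + (rgb[2] - b) ** 2
    return min(palette, key=dist2)

palette = {'red': (255, 0, 0), 'green': (0, 128, 0), 'blue': (0, 0, 255)}
print(nearest_colour((200, 30, 40), palette))   # 'red'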

‘Processor load’ isn’t really the correct term here.

Anywho, if you have a formula that gives you a single value from the range or can reduce the range in some way, then you can use that. But if you have a massive number of irregularly shaped bins, you’re stuck until you redesign your problem.
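
For the sorted-cutoffs case specifically, the standard-library bisect module will do the range lookup for you in O(log n) comparisons; a minimal sketch using the book’s cutoffs:

from bisect import bisect_right

cutoffs = [0.6, 0.7, 0.8, 0.9]        # sorted lower bounds
grades = ['F', 'D', 'C', 'B', 'A']    # one more entry than cutoffs

def grade(score):
    # bisect_right counts how many cutoffs are <= score,
    # which is exactly the index of the matching grade
    return grades[bisect_right(cutoffs, score)]

print(grade(0.85))   # B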

Thanks Jeremy, I’ve got something that works for the PY4E book code:

try:
    inp = float(input("Enter score: "))
except ValueError:
    print('Bad score')
    quit()
if not 0 < inp < 1:
    print('numbers between 0 and 1 only please')
    quit()
inp = int(inp * 10) / 10               # truncate to one decimal place, e.g. 0.85 -> 0.8
inp = 0.5 if inp < 0.6 else inp        # collapse everything below 0.6 onto a single key
dic = {0.9: 'A', 0.8: 'B', 0.7: 'C', 0.6: 'D', 0.5: 'F'}
print(dic[inp])

Would this be faster than using elifs? I keep meaning to write a speed-test module, but haven’t got around to it yet.

Hopefully when it comes to it I can work out how to do something similar with a much larger dictionary, if that’s a good way to do it.

People do a lot of premature hand-wringing about optimization. In this case, I don’t know that there would be a massive difference, but the only way to know would be to benchmark.
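
If you do want a number, the standard-library timeit module is enough for a quick check; something along these lines (both versions wrapped in functions so they can be called repeatedly):

import timeit

SETUP = """
DIC = {0.9: 'A', 0.8: 'B', 0.7: 'C', 0.6: 'D', 0.5: 'F'}

def grade_elif(inp):
    if inp >= 0.9:
        return 'A'
    elif inp >= 0.8:
        return 'B'
    elif inp >= 0.7:
        return 'C'
    elif inp >= 0.6:
        return 'D'
    return 'F'

def grade_dict(inp):
    inp = int(inp * 10) / 10
    return DIC[0.5 if inp < 0.6 else inp]
"""

print(timeit.timeit('grade_elif(0.85)', setup=SETUP))   # seconds for 1,000,000 calls
print(timeit.timeit('grade_dict(0.85)', setup=SETUP))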

In general, clearer code is preferable and you should only trade clarity for speed when you know that you are changing a portion of the code that is a performance bottleneck.


I think you’re right about optimization. At some point I’d like to rewrite the VB code in Python, and I’m interested in AI, which also requires efficient code, but generally I think I’m not going to need to make too much of a fuss about it. Mostly I’m just playing about with code to try and make it more compact, without sacrificing readability very much.

Don Knuth says premature optimization is the root of all evil. That said, here’s some back-of-the-envelope discussion to demonstrate why in this case:

We typically measure processor performance in FLOPS (floating point operations/sec). In reality, that’s GFLOPS for consumer-grade equipment these days, and TFLOPS or PFLOPS for high-performance equipment. With superscalar processors, you can assume at least one instruction will be retired per clock cycle, so clock speed gives you a rough measure of performance, e.g., a 3.4 GHz processor will perform at ~3.4 GFLOPS. With multi-core, SIMD chips, and so on, performance will generally be higher, but that’s irrelevant for the moment.

So why FLOPS and not some other instruction? That’s because floating point operations take the longest to execute (usually a handful of cycles), so they place an upper bound on how long instructions take. On the other hand, test-and-branch operations like those in your if/else ladder execute in a single clock. So on our example 3.4 GHz processor you should be able to execute a 3.4-billion-option if/else ladder in about a second (ignoring interpreter overhead). That’s not much of a performance issue until you get to really big data sets.

As a general comment, quantization algorithms are definitely what you want to be looking for – have you googled? There are many. This is a common problem in digital signal processing.
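
If the data ends up in arrays (your RGB idea, say), NumPy’s digitize does this binning over a whole array at once; a quick sketch, assuming NumPy is installed:

import numpy as np

scores = np.array([0.35, 0.62, 0.71, 0.88, 0.95])
cutoffs = [0.6, 0.7, 0.8, 0.9]                   # sorted bin edges
grades = np.array(['F', 'D', 'C', 'B', 'A'])

# digitize returns, for each score, the index of the bin it falls into
print(grades[np.digitize(scores, cutoffs)])      # ['F' 'D' 'C' 'B' 'A']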

Lastly, regarding efficiency and compactness: these are far from the same thing. For instance, executing an if/else ladder is basically a linear search, so O(n) in the limit. A dictionary lookup, on the other hand, is a hash-table lookup in Python, which is O(1) on average, though poorly chosen keys or a bad hash can push it back toward linear. With only a handful of cases, constant factors dominate, so you might just lose clarity with no attendant gain in performance. And as mentioned above, you won’t see that gain until the asymptotics kick in, which takes a big data set.

jrm


Then you’d first need to find a way to "define" 1000 conditions, which would probably be a huge mess in itself.

If you’ve got 1000 conditions to deal with, you hopefully have a shortcut to calculate those.
Like, a way-too-complex “shortcut” here would be to take the input and calculate how many letters it should move down from A. If grades follow a continuous system, you could easily get 26 cases with this approach (then you run out of letters).
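
Just to make the arithmetic concrete, something like this (a rough sketch using the book’s 0.1-wide bands; real code would want to be more careful about floating point):

def grade(score):
    # anything below 0.6 is an F; above that, each 0.1 band moves one letter down from A
    if score < 0.6:
        return 'F'
    offset = max(0, 9 - int(score * 10))   # 0.9+ -> 0 ('A'), 0.8+ -> 1 ('B'), ...
    return chr(ord('A') + offset)

print(grade(0.73))   # 'C'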

However, first off, you don’t use Python for speed. Like, that’s completely the wrong language for that.
Second off, as mentioned before, readability is much more important. Code is much more often read than written - so optimizing it for reading is preferred.
Third off, of course, if you deal with 1000 cases, "reading" is going to be a whole new challenge.

Yes, I didn’t make it all that clear: I meant 1000 possible ranges of values, which could maybe be tested with some mathematical equation. At the time I couldn’t remember how I did this with my VB code, so I didn’t put all that into my initial post. I think I could probably work it out again if I had to.

I understand Python isn’t exactly known for being a fast-executing language. I’ve done some Arduino coding with C, and I’d probably use that if I had to do something super fast, although it would mean relearning something I haven’t used for maybe 10 years. Python’s a lot easier to work with though!

Thanks, that’s a very comprehensive answer. There’s a lot there that I’d like to know more about; I think it would be a good direction to look into. YouTube seems to be a good place to start, generally.

Definitely. Also, thanks to modern machines, even "slow" code is no problem - we’ve got the power to spare. Plus Python has tons of libraries which might offer fast solutions for complex problems. And these are probably written in C, because that’s the language of the Python interpreter.

And if that’s not enough, there is Cython, with which you can basically compile Python code as if it were C. You’ve got to make some adjustments of course, like adding actual type declarations.

Or switch to Julia, which is very similar to Python but is compiled, and thus faster. But I haven’t looked into it all that much.

But yeah, there are a couple of options to gain speed before having to switch to a completely different language ^^


Julia is a good fit for something more math heavy. I would think real hard before using C… It’s a fine language but there are more ergonomic and modern options out there.


Again, beware premature optimization. Until you’ve written the code and profiled it, you don’t really have a reason to abandon an expressive language with lots of support for numerics for a low-level systems language that will make the code more difficult to write.

Also, having 1000 quantization “buckets” is a non-issue. The real issue is the size of the data you’re discretizing into those 1000 buckets.

jrm

In reply to all three of the replies above:

I’ll look into Julia; I hadn’t heard of it before. Part of the appeal of Python for me is the level of support, and the great range of modules available. It’s a good platform to learn OOP and other stuff I haven’t used before, and also AI.

I find low level languages fascinating; I started out moving from analogue to digital technician work decades ago, and learning about logic gates, registers, machine code, assembly etc was very interesting, but has left me with a misplaced need to be able to visualize everything in terms of memory allocation etc. Not always helpful, especially as the details are very hazy now.
