I’m looking for a number domain where an update like x += z is not necessarily guaranteed to be reversible, given that x and z are different variables and that the result of the update is known.
You’re going to have to explain what you mean a bit more, because there aren’t any languages that allow for reversible programming outside of a very few, very limited (and mainly academic) experiments.
Thanks for your answer. It’s within reversible arithmetic and is indeed an academic question, so there is no specific language connected to it. As I see it, no integers would do the trick as a variable. I can’t seem to figure out a case where, if I update a variable with x += z, the update x -= z could fail to reverse that operation.
This is way beyond what little knowledge I have of it, and if it’s purely maths then I would say this is the wrong forum? It’s a little confusing when you say it’s just maths, then talk about updating variables.
If it is programming, you can’t possibly guarantee it unless you change how computers actually currently work (which seems to be the main point of research into reversible programming afaics)
True, I’m looking for a number domain where this will hold, but it seems understanding how this works is also beyond my knowledge. As far as I know there are few reversible languages, and x += z in reality means x = x + z, or am I wrong? So in that case, after z is added to x, we can reversibly subtract it with x -= z.
Yeah, this is totally a math question and has nothing to do with programming.
You could, in programming, create a class and define how addition and subtraction work on its objects and just write something useless in order to achieve this - but that’s probably not what you are looking for?
Like, you can define addition normally but replace “-” with multiplication. But then you would just be working around the symbols instead of the definitions.
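A throwaway sketch of that operator-swapping idea (Python; the class name `Weird` is my own invention, not anything standard):

```python
# Hypothetical class where "+" behaves normally but "-" is secretly
# multiplication, so subtraction no longer undoes addition.
class Weird:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return Weird(self.value + other.value)   # normal addition

    def __sub__(self, other):
        return Weird(self.value * other.value)   # "-" is really "*"

x = Weird(1)
z = Weird(2)
x += z              # no __iadd__, so this falls back to __add__: x.value == 3
x -= z              # __sub__ multiplies: x.value == 3 * 2 == 6, not 1
print(x.value)      # 6
```

As said, this only works around the symbols: the underlying operation written as “-” just isn’t subtraction anymore.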
If you are looking for a number-domain… I mean you could create one that doesn’t allow addition in the first place, like one that only contains the number 1. You cannot reverse an undefined calculation.
But I feel like subtraction is defined as the reverse function to addition, and given addition is a linear function, it’s by definition always reversible. Dunno how you could find a number-domain where it wouldn’t work, without just downright defining a different addition function.
In programming, though, it’s not doing anything fancy like tricks that say subtraction is multiplication or anything like that. x -= z isn’t the reverse of x += z except in the abstract mathematical sense (unless you store every single thing that ever happened in the program, plus, afaics, some metastructure that understands the language at some higher level so that it can rerun computations).
So take x += z, and say x has been assigned the value 1 and z has been assigned the value 2. Mathematically that’s 1 + 2, and x is assigned the value 3. Now x -= z. Mathematically, that’s 3 - 2, so x is assigned the value 1. In that way it’s reversible.
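For what it’s worth, that round trip run literally (plain Python integers):

```python
# The worked example above: add z, then subtract z, and the value
# comes back round-trip exact for ordinary integers.
x = 1
z = 2
x += z          # x is now 3
assert x == 3
x -= z          # subtracting z brings x back to where it started
print(x)        # 1
```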
But this isn’t mathematics, it’s something running on a physical system, and it is subject to real, actual physical laws; it’s not an abstract model. The first value (representing the number 1) and the second value (also representing the number 1) aren’t the same, one isn’t a reverse of the other; computers can’t make that abstract leap that humans make.
Practically that’s fine; who cares whether it’s actually reversible most of the time. However, it seems to be something a fair number of people feel is important, but it needs completely different chip architectures to do it practically, afaik.
Edit: and the fact of the mathematical values being the same in this case: that’s not the key thing. It’s the computation itself that has to be reversible. You, as a person, can look at the example and say one is the reverse of the other, but it’s not a person running the code; the computer doesn’t know that. So:
x += z
How can that be reversed? Not like this:
x -= z
I haven’t reversed anything. I’ve written a second line of code, knowing, as a person reading the code, what the first line did. To be reversible, the computer needs to backtrack within the program, be able to run:
z =+ x
To go back, temporally (NB variable assignment shorthand seems to be a bad example here, I’m not sure that could ever work, probably needs some functional language constructs). This is literally impossible in any existing non-experimental language. For one thing, the semantics are going to be mind-bendingly weird: how can you make a language that can run programs forwards or backwards? If you can do that, how does the programmer denote they’re going to reverse, what do functions and control structures look like? You can’t have functions that just take input and return output: that doesn’t work in this model. You can’t have loops that just iterate, because you need to completely invert them when you go back. You can’t just store everything that happened, because that doesn’t allow you to reverse computations.
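Just to make the “store everything that happened” idea concrete, here’s a contrived sketch (entirely my own invention, and it’s bookkeeping rather than real reversible computing): log each update as it happens, then “go back” by replaying the inverse operations in reverse order.

```python
# Contrived sketch: mimic going backwards by logging every update
# and replaying the inverses most-recent-first. This is an undo log,
# not true reversible computation.
log = []

def add_update(env, name, amount):
    env[name] += amount
    log.append((name, amount))    # remember what happened

def rewind(env):
    while log:
        name, amount = log.pop()  # undo the most recent update first
        env[name] -= amount

env = {"x": 1, "z": 2}
add_update(env, "x", env["z"])    # x += z  -> x == 3
add_update(env, "x", 10)          # another update -> x == 13
rewind(env)                       # replay the inverses
print(env["x"])                   # back to 1
```

Which is exactly the problem described above: the program hasn’t been reversed, the history has just been recorded and undone by hand.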
I have read this thread over a couple times, and am still a bit confused as to the goal of the question subject.
Although, it does remind me of a Symbolic toolbox (library) I have used, where it is notationally trivial to reverse a procedure. That is, the expressions are stored as symbolic relationships, and, therefore, provide a more consistent way to represent operations, as they are not compiled into a numeric value (no floating-point headache).
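On that “floating-point headache”, an aside of my own: with ordinary machine floats (IEEE-754 doubles, which Python floats are), the x += z / x -= z round trip genuinely isn’t always exact, because the addition can lose information. A minimal sketch:

```python
# Floating-point aside: adding then subtracting the same z does not
# always restore x, because the small addend can be absorbed.
x = 1.0
z = 1e16
x += z      # 1e16 + 1.0 rounds back to 1e16: the 1.0 is absorbed
x -= z      # subtracting z now yields 0.0, not the original 1.0
print(x)    # 0.0
```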
But, if you are looking for a language/program that reads/runs in reverse, then I am clueless
Yeah, I only really have a very basic understanding from a programming PoV (no idea re the maths) and of the basic reasons why it’s attractive to researchers (it seems to get around some physical limitations of current computer architecture). And what OP posted is surely practical programming semantics, not maths – it’s variable assignment (and shorthand at that), and one isn’t the reverse of the other; it’s only the reverse in the sense that if I read it written down, I understand one sort of undoes the other.
The more I think about it, the less I can see how it’s even vaguely possible with existing language semantics outside of tiny contrived scenarios (or very specific controlled scenarios) – it’s doing my head in, so I should probably stop, but the more I break down what’s needed for even completely basic things that exist in existing, normal languages the more bizarre it gets
I need to stop thinking about this, but say I have variable a and variable b. I have a function add that takes two arguments and returns the sum. a = 5, b = 5, c = add(a, b).
So c is 10. Now go back. (10)dda? I dunno, the opposite of add takes a result and produces two values. So what are they? 5 and 5? Or 4 and 6? Or 9 and 1? There are infinite options; the only way around seems to be to store what happened, which isn’t reversing things. And what’s a? And what’s b? So I don’t think you can have functions that take more than one argument, and I don’t see how you can have variables. I’m gone here… probably need to go to bed.
Thanks a lot for your answers and help DanCouper, it’s really appreciated
I found what I consider an answer to the reversible procedure of computation in this video Reversible Computing - YouTube
It just doesn’t currently give the answer to which number domain should still not be guaranteed to be reversible. I don’t know if this is related more to number types; it is still an arithmetic question, but fair to say I’m as confused as you are.
Hi JeremyLT. I think you’re right, as I read further into the article too, but the question is still about number types for reversible arithmetic; maybe this is a mistake. An update like x += z should still not give 100% reversibility, which I can’t seem to figure out for a given number domain.