> "Modern" languages try to avoid exceptions by using sum types and pattern matching plus lots of sugar to make this bearable. I personally dislike both exceptions and its emulation via sum types. ... I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid.
Special values like NaN are half-assed sum types. The latter give you compiler guarantees.
SJMG 1 day ago [-]
Not a defense of the poison value approach, but in this thread Araq (Nim's principal author) lays out his defense for exceptions.
I’d like to see their argument for it. I see no benefit in pushing NaN through a code path as if it were a number, corrupting all operations it is part of, and the same is true for the others.
snek_case 1 day ago [-]
The reason NaN exists is for performance AFAIK. i.e. on a GPU you can't really have exceptions. You don't want to be constantly checking "did this individual floating-point op produce an error?" It's easier and faster for the individual floating point unit to flag the output as a NaN. Obviously NaNs long predate GPUs, but floating-point support was also hardware accelerated in a variety of ways for a long time.
That being said, I agree that the way NaNs propagate is messy. You can end up only finding out that there was an error much later during the program's execution and then it can be tricky to find out where it came from.
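The late-detection problem described here is easy to reproduce; a minimal sketch in Rust (any IEEE-754 language behaves the same way):

```rust
fn main() {
    // One bad operation early on quietly produces a NaN...
    let bad = (-1.0_f64).sqrt();
    // ...which silently infects everything downstream, with no signal.
    let result = (bad + 100.0) * 2.0;
    // The error is only observable wherever you finally think to check:
    assert!(result.is_nan());
}
```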
beagle3 1 day ago [-]
The alternative is checking the result of every operation, or using “signaling NaNs” that raise an exception on a (properly configured) scalar operation on a CPU. As soon as non-scalar code is involved - SIMD or GPU - quiet NaNs with strategically placed explicit tests along the computation become the only reasonable/efficient option.
cb321 2 days ago [-]
There is no direct argument/guidance that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array-processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code search for its imports and find arguments pro & con in various places surrounding that.
From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.
Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.
I imagine all of it comes down to questions of how locally can/should code be forced to confront problems, much like arguments about try/except/catch kinds of exception handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions and essentially "batching factors" for error handling that relate to all the other batching factors going on.
Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed value (i.e. 0x80000000) of integers for NA.
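R's integer NA can be mimicked directly in any language with 32-bit signed ints; a sketch in Rust, with `NA_INTEGER` and `is_na` as hypothetical stand-in names for R's internal sentinel machinery:

```rust
// R marks an integer as NA using the minimal signed 32-bit value,
// i.e. the bit pattern 0x80000000. A sentinel check then looks like:
const NA_INTEGER: i32 = i32::MIN;

fn is_na(x: i32) -> bool {
    x == NA_INTEGER
}

fn main() {
    assert_eq!(NA_INTEGER as u32, 0x8000_0000);
    assert!(is_na(i32::MIN));
    assert!(!is_na(0));
}
```

The cost, as the sentinel critics in this thread point out, is that nothing stops ordinary arithmetic from producing or consuming the sentinel by accident.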
To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
kace91 2 days ago [-]
>To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
That's fair, I wasn't dismissing the practice but rather just commenting that it's a shame the author didn't clarify their preference.
I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity.
Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking every calculation for validity is larger than the savings from skipping the invalid ones early.
There is a catch though. Numpy and R are very oriented to calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.
cb321 1 day ago [-]
The conversation around Nim for the past 20 years has been rather fragmented - IRC channels, Discord channels (dozens, I think), later the Forum, Github issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it to a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are".
Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, Databases, and operating systems (I would bin Numpy/R under Plangs+Databases as they are kind of "data languages"). Consequently, opinions can be very strong (often having this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.
If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymour Cray was doing it in the 1980s for the Cray-3 vector supercomputer). Anyway, just trying to help. That link to the DB Null page I gave is probably a good starting point.
otabdeveloper4 19 hours ago [-]
There is no argument. It's literally just a "programming is hard, let's go shopping" sentiment.
elcritch 2 days ago [-]
The compiler can still enforce checks, such as with nil checks for pointers.
In my opinion it’s overall cleaner if the compiler handles enforcing it when it can. Something like “ensure variable is initialized” can just be another compiler check.
Combined with an effects system that lets you control which errors to enforce checking on or not. Nim has a nice `forbids: IOException` that lets users do that.
ux266478 2 days ago [-]
Both of these things are just pattern matches and monads, respectively - just not user-definable ones.
xigoi 12 hours ago [-]
On the other hand, it’s more ergonomic and readable because you don’t need to declare a new name.
if name != nil:
  echo name
versus
case name
of Some(unwrappedName):
  echo unwrappedName
umanwizard 2 days ago [-]
> The compiler can still enforce checks, such as with nil checks for pointers.
Only sometimes, when the compiler happens to be able to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be.
wavemode 2 days ago [-]
There's nothing preventing this for floats and ints in principle. e.g. the machine representation could be float, but the type in the eyes of the compiler could be `float | nan` until you check it for nan (at which point it becomes `float`). Then any operation which can return nan would return `float | nan` instead.
tbh this system (assuming it works that way) would be more strict at compile time than the vast majority of languages.
Mond_ 1 day ago [-]
This is a bit confused. You're saying `float`, but a float comes with NaN by default. Any float can take NaN values.
If you actually want the compiler to check this on the level of the type system, it'd have to be `NonNaNFloat | NaN`. Then you can check which one you have and continue with a float that is guaranteed to not be NaN.
But (importantly) a NonNaNFloat is not the same as a float, and this distinction has to be encoded in the type system if you want to take this approach seriously. This distinction is NOT supported by most type systems (including Rust's std afaik, fwiw). It's similar to Rust's NonZero family of types.
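The distinction can be sketched as a newtype with a checked constructor; `NonNanF64` here is a hypothetical name (the ordered-float crate's `NotNan` plays this role in practice, and it's the same pattern as the NonZero family):

```rust
/// A float statically known not to be NaN (hypothetical sketch).
#[derive(Clone, Copy, Debug, PartialEq)]
struct NonNanF64(f64);

impl NonNanF64 {
    /// The only place a runtime check happens: NaN is rejected at the boundary.
    fn new(x: f64) -> Option<NonNanF64> {
        if x.is_nan() { None } else { Some(NonNanF64(x)) }
    }
    fn get(self) -> f64 { self.0 }
}

fn main() {
    assert!(NonNanF64::new(1.5).is_some());
    assert!(NonNanF64::new(f64::NAN).is_none());
}
```

Once you hold a `NonNanF64`, the type system remembers the check; the awkward part is that every NaN-producing operation has to hand you back a plain `f64` again.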
wavemode 1 day ago [-]
You keep talking about Rust, but I'm not referring to Rust. This thread is discussing a (hypothetical, as-yet undeveloped) type system for a new version of Nim.
Hypothetically, no, the float type would not admit NaNs. You would be prevented from storing NaNs in them explicitly, and operations capable of producing NaNs would produce a `float | nan` type that is distinct from float, and can't be treated like float until it's checked for NaN.
And I'm not sure why it's being discussed as though this is some esoteric language feature. This is precisely the way non-nullable types work in languages like Kotlin and TypeScript. The underlying machine representation of the object is capable of containing null values, yes, but the compiler doesn't let you treat it as such (without certain workarounds).
Mond_ 20 hours ago [-]
Huh? Rust is just an example here. What I am saying is that you're just redefining float to mean NonNaNFloat.
This is fine, I guess, but it will cause a bunch of problems, since e.g. division of two floats has to be able to return NaN. At that point you either need to require a check to see if the value is NaN (inconvenient and annoying) or allow people to just proceed. Not sure I am exactly sold on this so far.
wavemode 13 hours ago [-]
Nobody's trying to sell you on anything. You again seem to be out-of-context with respect to the discussion being had.
The parent commenter stated that sum types work differently from a hypothetical float / NaN split, because compilers can't always "understand the code fully enough" to enforce checks. I simply responded that that is not true in principle, since you could just treat non-nan floats the same way that many languages treat non-null types.
Indeed, everything you're describing about non-nan floats applies equally to sum types - you can't operate on them unless you pattern match. You're highlighting the exact point I'm trying to make!
The fact that you consider this system "inconvenient" is entirely irrelevant to this discussion. Maybe the designer of Nim simply cares more about NaN-safety than you or I do. Who knows. Regardless, the original statement (that sum types and non-NaN floats can't work the same way) is incorrect.
Unfortunately, Rust doesn't seem to be smart enough to represent `Option<NotNan<f64>>` in 8 bytes, even though in theory it should be possible (it does the analogous thing with `Option<NonZero<u64>>`).
Yeah, I'm not sure I've ever seen NaN held up as an example to be emulated before, rather than as something people complain about.
echelon 1 day ago [-]
Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything.
ameliaquining 1 day ago [-]
I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.
An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.
(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
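A rough sketch of the two-float-type idea in Rust; `Finite` is a hypothetical type illustrating the proposal (only `+` shown), not a real library API:

```rust
use std::ops::Add;

/// A float statically known to be finite. Operations panic if the
/// result leaves the finite domain, per the proposal above.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Finite(f64);

impl Finite {
    fn new(x: f64) -> Option<Finite> {
        if x.is_finite() { Some(Finite(x)) } else { None }
    }
}

impl Add for Finite {
    type Output = Finite;
    fn add(self, rhs: Finite) -> Finite {
        let r = self.0 + rhs.0;
        // All operands were finite; if the result is not, panic here:
        assert!(r.is_finite(), "finite-float operation produced {r}");
        Finite(r)
    }
}

fn main() {
    let a = Finite::new(1.5).unwrap();
    let b = Finite::new(2.0).unwrap();
    assert_eq!(a + b, Finite(3.5));
    // Infinity and NaN never enter the type in the first place:
    assert!(Finite::new(f64::NAN).is_none());
}
```

The ergonomic win over a sum type is that `+` on two `Finite` values returns a `Finite` directly, with the panic playing the role of the unwrap.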
saghm 8 hours ago [-]
> I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.
I was thinking about this the other day for integer overflow specifically, given that it's not checked in release mode for Rust (by default at least, I think there's a way to override that?). I suspect that it's also influenced by the fact that people kinda expect to be able to use operators for arithmetic, and it's not really clear how to deal with something like `a + b + c` in a way where each step has to be fallible; you could have errors propagate and then just have `(a + b + c)?`, but I'm not sure that would be immediately intuitive to people, or you could require it to be explicit at each step, e.g. `((a + b)? + c)?`, but that would be fairly verbose. The best I could come up with is to have a macro that does the first thing, which I imagine someone has probably already written before, where you could do something like `checked!(a + b + c)`, and then have it give a single result. I could almost imagine a language with more special syntax for things having a built-in operator for that, like wrapping it in double backticks or something rather than `checked!(...)`.
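The macro idea sketched here is indeed writable; a hedged take (the `checked!` name is the commenter's hypothetical, fixed to i64 for simplicity):

```rust
// Rewrites an addition chain like `checked!(a + b + c)` into a chain of
// std's checked_add calls, yielding a single Option<i64>.
macro_rules! checked {
    ($first:tt $(+ $rest:tt)*) => {{
        let acc: Option<i64> = Some($first);
        $( let acc = acc.and_then(|x| x.checked_add($rest)); )*
        acc
    }};
}

fn main() {
    let (a, b) = (i64::MAX - 1, 1_i64);
    assert_eq!(checked!(a + b), Some(i64::MAX));
    assert_eq!(checked!(a + b + b), None); // overflow is reported, not wrapped
}
```

Matching on `tt` keeps the `+` visible to the macro (an `expr` fragment would swallow the whole chain), at the cost of needing parentheses around compound operands.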
aw1621107 23 hours ago [-]
> Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.
I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.
I also feel that as far as Rust goes, the NonZero<T> types would seem to point towards not using the described finite/arbitrary float scheme as the NonZero<T> types don't implement "regular" arithmetic operations that can result in 0 (there's unsafe unchecked operations and explicit checked operations, but no +/-/etc.).
ameliaquining 21 hours ago [-]
Rust's NonZero basically exists only to enable layout optimizations (e.g., Option<NonZero<usize>> takes up only one word of memory, because the all-zero bit pattern represents None). It's not particularly aiming to be used pervasively to improve correctness.
The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.
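The layout optimization mentioned above is easy to observe directly (sizes assume a typical 64-bit target):

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // The all-zero bit pattern encodes None, so the Option costs nothing:
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
    // A plain f64 uses every bit pattern (including all the NaNs),
    // so Option<f64> needs extra room for its discriminant:
    assert!(size_of::<Option<f64>>() > size_of::<f64>());
}
```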
aw1621107 21 hours ago [-]
> By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.
I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.
I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after floating point operations might interfere with vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off checks in release mode, as much as I dislike that decision.
lairv 1 day ago [-]
That's why I always disliked calling null the "billion dollar mistake": null and Option<T> are basically the same. The mistake is not checking it at compile time.
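In other words, the win is that the check is compiler-enforced; a small Rust illustration:

```rust
// The "nullable" case is encoded in the return type, so the compiler
// refuses to let callers touch the value without handling None.
fn lookup(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    match lookup(2) {
        Some(name) => println!("found {name}"),
        None => println!("not found"), // omitting this arm is a compile error
    }
}
```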
the_gipsy 1 day ago [-]
...and if everything was wrapped in Option<>.
If my grandmother had wheels, she'd be a bike.
kbd 2 days ago [-]
The biggest thing I still don’t like about Nim is its imports:
import std/errorcodes

proc p(x: int) {.raises.} =
  if x < 0:
    raise ErrorCode.RangeError
  use x
I can’t stand that there’s no direct connection between the thing you import and the names that wind up in your namespace.
PMunch 2 days ago [-]
There is a direct connection, you just don't have to bother with typing it. Same as type inference, the types are still there, you just don't have to specify them. If you have a collision in name and declaration then the compiler requires you to specify which version you wanted. And with language inspection tools (like LSP or other editor integration) you can easily figure out where something comes from if you need to. Most of the time though I find it fairly obvious when programming in Nim where something comes from, in your example it's trivial to see that the error code comes from the errorcodes module.
Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.
kbd 1 day ago [-]
When I was learning Nim and found out how imports work - that things stringify with a $ function that comes along with their types (since everything is splat imported), and that $ is massively overloaded - I went "oh, that all makes sense and works together". The LSP can help figure it out. It still feels like it's in bad taste.
It's similar to how Ruby (which also has "unstructured" imports) and Python are similar in a lot of ways yet make many opposite choices. I think a lot of Ruby's choices are "wrong" even though they fit together within the language.
beagle3 1 day ago [-]
Do note that unlike Python’s “from a import *; from b import *”, where you have no idea where a name came from later in the code (and e.g. changes to a and b, such as new versions, will change where a name comes from), Nim requires a name to be unambiguous, so that if “b” added a function that previously only “a” had, you’ll get a compile-time error.
xigoi 2 days ago [-]
It needs to be this way so that UFCS works properly. Imagine if instead of "a,b".split(','), you had to write "a,b".(strutils.split)(',').
polotics 2 days ago [-]
ok I do not understand.
What is preventing this
import std/errorcodes
from allowing me to use:
raise errorcodes.RangeError
instead of what Nim has?
Or even: why not "import std/ErrorCodes", having the plural in ErrorCodes.RangeError? I wouldn't mind that.
PMunch 2 days ago [-]
Nothing, and in fact this works. To move to an example which actually compiles:
import math
echo fcNormal
echo FloatClass.fcNormal
echo math.fcNormal
echo math.FloatClass.fcNormal
All of these ways of identifying the `fcNormal` enum value work, with varying levels of specificity.
If instead you do `from math import nil` only the latter two work.
treeform 2 days ago [-]
Nim imports are great. I would hate to qualify everything. It feels so bureaucratic when going back to other languages. They never cause me issues and are largely transparent. Best feature.
summarity 2 days ago [-]
You are free to import nil and type the fully qualified name.
Symmetry 2 days ago [-]
There are many things to like about Nim, but it does benefit from adherence to a style guide more than most languages.
ThouYS 1 day ago [-]
100% my beef with it. Same style as C++, where you never know where something comes from when clangd starts throwing one of its fits.
cb321 1 day ago [-]
PMunch and summarity both already said this, but because maybe code speaks louder than words (like pictures?)... This works:
from strutils as su import nil
echo su.split "hi there"
(You can put some parens () in there if you like, but that compiles.) So, you can do Python-style terse renames of imports with forced qualification. You just won't be able to say "hi there".(su.split) or .`su.split` or the like.
You can revive that, though, with a
template suSplit(x): untyped = su.split x
echo "hi there".suSplit
That most Nim code you see will not do this is more a cultural/popularity thing, a kind of copy-paste survey of dev tastes. It's much like people using "np" as the ident in `import numpy as np`. I was doing this renaming import before it was even widely popular, but I used a capital `N` for `numpy` and have had people freak out at me for it (and yet no one freaks out at Travis for not just calling it `np` in the first place).
So, it matters a little more in that this impacts how you design/demo library code/lib symbol sets and so on, but it is less of a big deal than people make it out to be. This itself is much like people pretending they are arguing about "fundamental language things", when a great deal of what they actually argue about are "common practices" or conventions. Programming language designers have precious little control over such practices.
fithisux 1 day ago [-]
Java is going to do the same. C already does it.
Not the best, but there is precedent.
mwkaufma 5 days ago [-]
Big "college freshman" energy in this take:
I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
It's fine to pick sentinel values for errors in context, but describing 0x80000000 as "pointless" in general with such a weak justification doesn't inspire confidence.
ratmice 2 days ago [-]
Without the low int, the even/odd theorem falls apart for wraparound. I've definitely seen algorithms that rely upon that.
I would agree that whether error values are in or out of band is pretty context-dependent - such as whether you answered a homework question wrong, or your dog ate it. One is not a condition that can be graded.
Mond_ 1 day ago [-]
Meh, you also see algorithms that have subtle bugs because the author assumed that for every integer x, -x has the same absolute value and opposite sign.
I view both of these as not great. If you strictly want to rely on wraparound behavior, ideally you specify exactly how you're planning to wrap around in the code.
umanwizard 2 days ago [-]
What is the "even/odd theorem" ?
ratmice 2 days ago [-]
that all integers are either even or odd, and that for an even integer, that integer + 1 and - 1 are odd, and vice versa for odd numbers. Because the negative range has one more value than the positive range, low(integer) and high(integer) have different parity. So when you wrap around with overflow or underflow, you continue to transition from even to odd, or odd to even.
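Concretely (a Rust sketch, since its wrapping semantics are defined):

```rust
fn main() {
    // i32::MAX = 2147483647 is odd; wrapping past it lands on
    // i32::MIN = -2147483648, which is even, so the even/odd
    // alternation survives the overflow boundary.
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    assert_eq!(i32::MAX & 1, 1); // odd
    assert_eq!(i32::MIN & 1, 0); // even
}
```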
xigoi 2 days ago [-]
If you need wraparound, you should not use signed integers anyway, as that leads to undefined behavior.
ratmice 2 days ago [-]
Presumably since this language isn't C they can define it however they want to, for instance in rust std::i32::MIN.wrapping_sub(1) is a perfectly valid number.
xigoi 2 days ago [-]
Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.
beagle3 1 day ago [-]
And yet, Nim does overflow checking by default.
umanwizard 2 days ago [-]
Signed overflow being UB (while unsigned is defined to wrap) is a quirk of C and C++ specifically, not some fundamental property of computing.
Symmetry 2 days ago [-]
Specifically, C comes from a world where allowing for machines that didn't use 2's complement (or 8-bit bytes) was an active concern.
aw1621107 1 day ago [-]
Interestingly, C23 and C++20 standardized 2's complement representation for signed integers but kept UB on signed overflow.
Asooka 1 day ago [-]
Back when those machines existed, UB meant "the precise behaviour is not specified by the standard; the specific compiler for the specific machine chooses what happens" rather than the modern "a well-formed program does not invoke UB". For what it is worth, I compile all my code with -fwrapv et al.
aw1621107 23 hours ago [-]
> UB meant "the precise behaviour is not specified by the standard, the specific compiler for the specific machine chooses what happens"
Isn't that implementation-defined behavior?
xigoi 2 days ago [-]
Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.
ratmice 2 days ago [-]
Presumably unsigned integers want to return errors too?
Edit: I guess they could get rid of a few numbers...
Anyhow, it isn't a philosophy that is going to get me to consider Nimony for anything.
umanwizard 2 days ago [-]
> making basic types work differently from C would involve major performance costs.
Not if you compile with optimizations on. This C code:
int wrapping_add_ints(int x, int y) {
  return (int)((unsigned)x + (unsigned)y);
}
Compiles to this x86-64 assembly (with clang -O2):
wrapping_add_ints:
  lea eax, [rdi + rsi]
  ret
Which, for those who aren't familiar with x86 assembly, is just the normal instruction for adding two numbers with wrapping semantics.
k__ 1 day ago [-]
I had the impression that the creator of Nim isn't very fond of academics (or academic solutions).
sevensor 2 days ago [-]
I have been burned by sentinel values every time. Give me sum types instead. And while I’m piling on, this example makes no sense to me:
proc fib[T: Fibable](a: T): T =
  if a <= 2:
    result = 1
  else:
    result = fib(a-1) + fib(a-2)
Integer is the only possible type for T in this implementation, so what was the point of defining Fibable?
Hendrikto 2 days ago [-]
I agree about sentinel values. Just return an error value.
I think the fib example is actually cool though. Integers are not the only possible domain; everything that supports <=, +, and - is. Could be int, float, a vector/matrix, or even some weird custom type (provided that Nim has operator overloading, which it seems to).
May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.
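For illustration, the same shape can be written outside Nim; a hedged Rust analogue of the generic fib, where the `From<u8>` bound is my stand-in for Nim's implicit conversion of the literals 1 and 2:

```rust
use std::ops::{Add, Sub};

// Any type with ordering, +, -, and a way to produce the literals 1 and 2
// is "Fibable" in this sense: int, float, or a custom numeric type.
fn fib<T>(a: T) -> T
where
    T: PartialOrd + Add<Output = T> + Sub<Output = T> + From<u8> + Copy,
{
    if a <= T::from(2) {
        T::from(1)
    } else {
        fib(a - T::from(1)) + fib(a - T::from(2))
    }
}

fn main() {
    assert_eq!(fib(7_i32), 13);
    assert_eq!(fib(7.0_f64), 13.0); // floats satisfy the same bounds
}
```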
sevensor 2 days ago [-]
Well, I agree about Fibable, it’s fine. It’s the actual fib function that doesn’t work for me. T can only be integer, because the base case returns 1 and the function returns T. Therefore it doesn’t work for all Fibables, just for integers.
cb321 2 days ago [-]
In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the literal "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as having an implicit "T(1)", "T(2)" - which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.
Nim is Choice in many dimensions that other PLangs are insistently monosyllabic/stylistic about - gc or not or what kind, many kinds of spelling, new operator vs. overloaded old one, etc., etc., etc. Some people actually dislike choice because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.
sevensor 1 day ago [-]
But given the definition of Fibable, it could be anything that supports + and - operators. That could be broader than numbers. You could define it for sets for example. How do you add the number 1 to the set of strings containing (“dog”, “cat”, and “bear”)? So I suppose I do have a complaint about Fibable, which is that it’s underconstrained.
Granted, I don’t know Nim. Maybe you can’t define + and - operators for non-numbers?
cb321 1 day ago [-]
Araq was probably trying to keep `Fibable` short for the point he was trying to make. So, your qualm might more be with his example than anything else.
You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that or do a `from system import nil` or etc.).
Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1, Nim 2) - `compiles`. So, the below will print out two lines - "2\n1\n":
when compiles SomeNumber "hi": echo 1 else: echo 2
when compiles SomeNumber 1.0: echo 1 else: echo 2
Last I knew "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what is the most elegant way to incorporate this extra constraint, but I think it's a mistake to think it is unincorporatable.
To address your question about '+', you can define it for non-SomeNumber, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement if the situation calls for `+` vs something else.
sevensor 1 day ago [-]
That’s fair. Sounds like the example was composed in haste and may not do the language justice.
cb321 1 day ago [-]
I think the example was chosen only for familiarity and is otherwise not great. Though it was the familiarity itself that probably helped you to so easily criticize it. So, what do I know? :-)
FWIW, the "catenation operator" in the Nim stdlib is ampersand `&`, not `+` which actually makes it better than most PLangs at visually disambiguating things like string (or other dynamic array, `seq[T]` in Nim) concatenation from arithmetic. So, `a&b` means `b` concatenated onto the end of `a` while `a+b` is the more usual commutative operation (i.e. same as `b+a`). Commutativity is not enforced by the basic dispatch on `+`, though such might be add-able as a compiler plugin.
Mostly, it's just a very flexible compiler / system.. like a static Lisp with a standard surface syntax closer to Python with a lot of parentheses made optional (but I think much more flexible and fluid than Python). Nim is far from perfect, but it makes programming feel like so much less boilerplate ceremony than most alternatives and also responds very well to speed/memory optimization effort.
sevensor 1 day ago [-]
Thanks for the discussion! I know a lot more about nim than I did this morning.
Hendrikto 2 days ago [-]
I see, I misunderstood your complaint then.
However, the base case being 1 does not preclude other types than integers, as cb321 pointed out.
jibal 1 day ago [-]
You're completely missing the point of this casual example in a blog post ... as evidenced by the fact that you omitted the type definition that preceded it, that is the whole point of the example. That it's not the best possible example is irrelevant. What is relevant is that the compiler can type check the code at the point of definition, not just at the point of instantiation.
And FWIW there are many possible types for T, as small integer constants are compatible with many types. And because of the "proc `<=`(a, b: Self): bool" in the concept definition of Fibable, the compiler knows that "2" is a constant of type T ... so any type that has a conversion proc for literals (remember that Nim has extensive compile-time metaprogramming features) can produce a value of its type given "2".
treeform 2 days ago [-]
There can be a lot of different integers: int16, int32, ... and unsigned variants. Even huge BigNum integers of any length.
esafak 2 days ago [-]
From my interaction with the Nim community, I came to the conclusion that nim could be more popular if its founder devolved decision making to scale up the community. I think he likes it the way it is; small, but his. He is Torvaldsesque in his social interactions.
nallerooth 2 days ago [-]
I feel the same way - as I suspect a lot of people here do. Nim posts are always upvoted and usually people say nice things about the language in the comments.. but there are few who claim to actually -use- the language for more than a small private project, if even that.
cb321 1 days ago [-]
The only way to really test out a programming language is by trying it out or reading how someone else approached a problem that you're interested in/know about.
There are over 2200 nimble packages now. Maybe not an eye-popping number, but there's still a good chance that somewhere in the json at https://github.com/nim-lang/packages you will find something interesting. There is also RosettaCode.org which has a lot of Nim example code.
This, of course, does not speak to the main point of this subthread about the founder but just to some "side ideas".
oscillonoscope 2 days ago [-]
I worked in nim for a little bit and it truly has a lot of potential but ultimately abandoned it for the same reason. It's never going to grow beyond the founder's playground.
xigoi 2 days ago [-]
Please no. Design by committee would lead to another C++.
pjmlp 1 days ago [-]
Languages designed by committee are plentiful, including all the mainstream ones; not a single one of them is still being developed by a single person.
almostgotcaught 2 days ago [-]
The second or third most popular language of all time? God forbid lol
xigoi 2 days ago [-]
Popular does not mean good. Tobacco smoking is also popular.
almostgotcaught 2 days ago [-]
Do you think this is clever? For a metaphor to be relevant to a discussion it has to be fitting, not just a dunk.
xigoi 2 days ago [-]
It’s not a metaphor. I was giving a counterexample to your implied claim that popularity is an indicator of quality.
kanaffa12345 2 days ago [-]
That wasn't an implied claim because we're not discussing metrics for judging quality.
venturecruelty 1 days ago [-]
You're right. It's everyone's least-favorite gotcha. Reminds me of this:
Waiter: "How is everything?"
Customer: "Great!"
Waiter, disgusted: "Even war?"
andyferris 2 days ago [-]
> floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
I have long thought that we need a NaI (not an integer) value for our signed ints. Ideally, the CPU would have overflow-aware instructions similar to floats that return this value on overflow and cost the same as wrapping addition/multiplication/etc.
mikepurvis 2 days ago [-]
From an implementation point of view, it would be similar to NaN; a designated sentinel value that all the arithmetic operations are made aware of and have special rules around producing and consuming.
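A library-level sketch of what that could look like (hypothetical, using `i32::MIN` as the NaI value, matching the `low(int)` suggestion above; hardware support would make the checks free):

```rust
// Hypothetical NaI ("not an integer") arithmetic, with i32::MIN reserved
// as the poison value. In software every op must check and propagate it.
const NAI: i32 = i32::MIN;

fn nai_add(a: i32, b: i32) -> i32 {
    if a == NAI || b == NAI {
        return NAI; // the sentinel infects every downstream result
    }
    // checked_add returns None on overflow; map that to the sentinel too
    a.checked_add(b).unwrap_or(NAI)
}

fn main() {
    assert_eq!(nai_add(2, 3), 5);
    assert_eq!(nai_add(i32::MAX, 1), NAI); // overflow poisons the result
    assert_eq!(nai_add(NAI, 7), NAI);      // and the poison propagates
    println!("ok");
}
```

Like NaN, this propagates silently, so you only learn where the error entered if you check at strategic points.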
fithisux 21 hours ago [-]
R has it.
ninjaquiv 1 days ago [-]
Does Nimony/Nim 3.0 have pattern matching, or any plans for it?
I wonder how Nim 3/Nimony handles or will handle bindings in patterns regarding copy, move or reference. Rust can change it per binding, and Ada's experimental pattern matching might have some plans or properties regarding that.[1]
> By default, identifier patterns bind a variable to a copy of or move from the matched value depending on whether the matched value implements Copy.
> This can be changed to bind to a reference by using the ref keyword, or to a mutable reference using ref mut. For example:
    match a {
        None => (),
        Some(value) => (),
    }

    match a {
        None => (),
        Some(ref value) => (),
    }
The GitHub issue had a strange discussion. I really disliked goteguru's equals-sign-based syntax, though I had difficulty judging the main design syntax.
I wonder what Araq thinks of Scala's Expression AST type. Tree, TermTree, and all the subtype case classes [2]. Tree has fields. Though I am not certain how the common variables are initialized.
>WCET ("worst case execution time") is an important consideration: Operations should take a fixed amount of time and the produced machine code should be predictable.
Good luck. Give the avionics guys a call if you solve this at the language level.
kapija 1 days ago [-]
great, now they just need to fix the whitespaces and people will start using it
xigoi 12 hours ago [-]
By modularizing the compiler, it will presumably be easier to create syntactic skins for Nim. I’m planning to make an S-expression syntax if nobody else does.
hota_mazi 1 days ago [-]
> It is not possible to say which exceptions are possible
So repeating the same mistake that Spring made by using runtime exceptions everywhere.
Now you can never know how exactly a function can fail, which means you are flying completely blind.
https://forum.nim-lang.org/t/9596#63118
From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.
Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.
I imagine all of it comes down to questions of how locally can/should code be forced to confront problems, much like arguments about try/except/catch kinds of exception handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions and essentially "batching factors" for error handling that relate to all the other batching factors going on.
Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimum signed value (i.e. 0x80000000) of integers for NA.
There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL)
To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
That's fair, I wasn't dismissing the practice, but rather just commenting that it's a shame the author didn't clarify their preference.
I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity.
Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking all calculations for validity is larger than the savings of skipping the invalid ones early.
There is a catch though. Numpy and R are very oriented to calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.
Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, Databases, and operating systems (I would bin Numpy/R under Plangs+Databases as they are kind of "data languages"). Consequently, opinions can be very strong (often having this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.
If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymour Cray was doing it in the 1980s for the Cray-3 Vector SuperComputer). Anyway, just trying to help. That link to the DB Null page I gave is probably a good starting point.
In my opinion it’s overall cleaner if the compiler handles enforcing it when it can. Something like “ensure variable is initialized” can just be another compiler check.
Combined with an effects system that lets you control which errors to enforce checking on or not. Nim has a nice `forbids: IOException` that lets users do that.
Only sometimes, when the compiler happens to be able to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be.
tbh this system (assuming it works that way) would be more strict at compile-time than the vast majority of languages.
If you actually want the compiler to check this on the level of the type system, it'd have to be `NonNaNFloat | NaN`. Then you can check which one you have and continue with a float that is guaranteed to not be NaN.
But (importantly) a NonNaNFloat is not the same as a float, and this distinction has to be encoded in the type system if you want to take this approach seriously. This distinction is NOT supported by most type systems (including Rust's std afaik, fwiw). It's similar to Rust's NonZero family of types.
Hypothetically, no, the float type would not admit NaNs. You would be prevented from storing NaNs in them explicitly, and operations capable of producing NaNs would produce a `float | nan` type that is distinct from float, and can't be treated like float until it's checked for NaN.
And I'm not sure why it's being discussed as though this is some esoteric language feature. This is precisely the way non-nullable types work in languages like Kotlin and TypeScript. The underlying machine representation of the object is capable of containing null values, yes, but the compiler doesn't let you treat it as such (without certain workarounds).
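In Rust terms the discipline is just a newtype whose constructor is the single checkpoint (a sketch; `NonNan` is a made-up name here, not a std type, though the ordered-float crate's `NotNan` is similar):

```rust
// Sketch of a non-NaN float type: the NaN check happens once, at the
// boundary, and everything holding a NonNan is statically NaN-free.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
struct NonNan(f64);

impl NonNan {
    // The only way in.
    fn new(x: f64) -> Option<NonNan> {
        if x.is_nan() { None } else { Some(NonNan(x)) }
    }
    fn get(self) -> f64 { self.0 }
}

fn main() {
    let a = NonNan::new(1.0).unwrap();
    // 0.0/0.0 is NaN, so construction fails:
    assert!(NonNan::new(0.0 / 0.0).is_none());
    // Operations that can produce NaN hand back the `float | nan` shape,
    // i.e. Option<NonNan>, which must be re-checked before further use:
    let q = NonNan::new(a.get() / 0.0); // 1.0/0.0 is +inf, not NaN
    assert!(q.is_some());
    println!("ok");
}
```

This is exactly the non-nullable-type pattern described above, with NaN playing the role of null.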
This is fine, I guess, but it will cause a bunch of problems, since e.g. division of two floats has to be able to return NaNs. At that point you either need to require a check to see if the value is NaN (inconvenient and annoying) or allow people to just proceed. Not sure I am exactly sold on this so far.
The parent commenter stated that sum types work differently from a hypothetical float / NaN split, because compilers can't always "understand the code fully enough" to enforce checks. I simply responded that that is not true in principle, since you could just treat non-nan floats the same way that many languages treat non-null types.
Indeed, everything you're describing about non-nan floats applies equally to sum types - you can't operate on them unless you pattern match. You're highlighting the exact point I'm trying to make!
The fact that you consider this system "inconvenient", is entirely irrelevant to this discussion. Maybe the designer of Nim simply cares more about NaN-safety than you or I do. Who knows. Regardless, the original statement (that sum types and non-nan floats can't work the same way) is incorrect.
Unfortunately, Rust doesn't seem to be smart enough to represent `Option<NotNan<f64>>` in 8 bytes, even though in theory it should be possible (it does the analogous thing with `Option<NonZero<u64>>`).
This thread is discussing the possibility of adding such an optimization: https://internals.rust-lang.org/t/add-float-types-with-niche...
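The claim is easy to check (sketch; `NotNan` here is a local stand-in newtype, not the ordered-float crate). NaN bit patterns are unused by the invariant, but rustc can't see that, so no niche is found and the `Option` costs a full extra word, whereas `NonZeroU64` gets the niche:

```rust
use std::mem::size_of;
use std::num::NonZeroU64;

// The "never NaN" invariant is invisible to the compiler, so no niche
// optimization applies to this newtype.
struct NotNan(f64);

fn main() {
    // NonZero's niche (the all-zero bit pattern) lets Option fit in 8 bytes.
    assert_eq!(size_of::<Option<NonZeroU64>>(), 8);
    // No such luck here: 8 bytes of payload plus an aligned discriminant.
    assert_eq!(size_of::<Option<NotNan>>(), 16);
    println!("ok");
}
```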
An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.
(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
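A minimal sketch of that scheme (names hypothetical, only addition shown), including the negative-zero normalization from the second paragraph:

```rust
use std::ops::Add;

// Hypothetical "finite float": the constructor rejects inf/NaN and
// normalizes -0.0 to +0.0, so bitwise equality would be sound.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Finite(f64);

impl Finite {
    fn new(x: f64) -> Option<Finite> {
        // x + 0.0 maps -0.0 to +0.0 and is a no-op on other finite values.
        if x.is_finite() { Some(Finite(x + 0.0)) } else { None }
    }
}

impl Add for Finite {
    type Output = Finite;
    fn add(self, rhs: Finite) -> Finite {
        let r = self.0 + rhs.0;
        // Finite op Finite must stay finite, else panic, per the proposal.
        assert!(r.is_finite(), "finite-float arithmetic left the finite range");
        Finite(r + 0.0)
    }
}

fn main() {
    let a = Finite::new(1.5).unwrap();
    let b = Finite::new(2.5).unwrap();
    assert_eq!(a + b, Finite::new(4.0).unwrap());
    assert!(Finite::new(f64::INFINITY).is_none());
    // Negative zero is normalized away on entry:
    assert_eq!(Finite::new(-0.0).unwrap(), Finite::new(0.0).unwrap());
    println!("ok");
}
```

The `x + 0.0` trick relies on IEEE 754 round-to-nearest, where `-0.0 + 0.0` is `+0.0`.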
I was thinking about this the other day for integer wrapping specifically, given that it's not checked in release mode for Rust (by default at least, I think there's a way to override that?). I suspect that it's also influenced by the fact that people kinda expect to be able to use operators for arithmetic, and it's not really clear how to deal with something like `a + b + c` in a way where each step has to be fallible; you could have errors propagate and then just have `(a + b + c)?`, but I'm not sure that would be immediately intuitive to people, or you could require it to be explicit at each step, e.g. `((a + b)? + c))?`, but that would be fairly verbose. The best I could come up with is to have a macro that does the first thing, which I imagine someone has probably already written before, where you could do something like `checked!(a + b + c)`, and then have it give a single result. I could almost imagine a language with more special syntax for things having a built-in operator for that, like wrapping it in double backticks or something rather than `checked!(...)`.
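The macro idea is straightforward to sketch (a hypothetical `checked!`, additions only; someone has doubtless published a fuller version):

```rust
// Hypothetical checked! macro: folds `a + b + c + ...` into a chain of
// checked_add calls, yielding Option<T> (None if any step overflows).
macro_rules! checked {
    ($a:tt + $b:tt) => { $a.checked_add($b) };
    ($a:tt + $b:tt + $($rest:tt)+) => {
        $a.checked_add($b).and_then(|acc| checked!(acc + $($rest)+))
    };
}

fn main() {
    let (a, b, c) = (1i32, 2i32, 3i32);
    assert_eq!(checked!(a + b + c), Some(6));
    let big = i32::MAX;
    assert_eq!(checked!(big + b + c), None); // overflow at the first step
    println!("ok");
}
```

The `and_then` chaining gives the "propagate and check once at the end" behavior, so call sites stay close to ordinary operator syntax.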
I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.
I also feel that as far as Rust goes, the NonZero<T> types would seem to point towards not using the described finite/arbitrary float scheme as the NonZero<T> types don't implement "regular" arithmetic operations that can result in 0 (there's unsafe unchecked operations and explicit checked operations, but no +/-/etc.).
The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.
I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.
I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after floating point operations might interfere with vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off checks in release mode, as much as I dislike that decision.
If my grandmother had wheels, she'd be a bike.
Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.
It's similar to how Ruby (which also has "unstructured" imports) and Python are similar in a lot of ways yet make many opposite choices. I think a lot of Ruby's choices are "wrong" even though they fit together within the language.
What is preventing this:

    import std/errorcodes

from allowing me to use `raise errorcodes.RangeError` instead of what Nim has? Or even why not `import std/ErrorCodes`, keeping the plural in `ErrorCodes.RangeError`? I wouldn't mind.
If instead you do `from math import nil` only the latter two work.
You can revive that, though, with a
That most Nim code you see will not do this is more a cultural/popularity thing, a kind of copy-paste/survey of dev tastes. It's much like people using "np" as the ident in `import numpy as np`. I was doing this renaming import before it was even widely popular, but I used capital `N` for `numpy` and have had people freak out at me for such (and yet no one freaking out at Travis for not just calling it `np` in the first place).

So, it matters a little more in that this impacts how you design/demo library code/lib symbol sets and so on, but it is less of a big deal than people make it out to be. This itself is much like people pretending they are arguing about "fundamental language things", when a great deal of what they actually argue about are "common practices" or conventions. Programming language designers have precious little control over such practices.
Not the best but there is precedent.
I would agree, whether error values are in or out of band is pretty context dependent such as whether you answered a homework question wrong, or your dog ate it. One is not a condition that can be graded.
I view both of these as not great. If you strictly want to rely on wraparound behavior, ideally you specify exactly how you're planning to wrap around in the code.
Isn't that implementation-defined behavior?
Edit: I guess they could get rid of a few numbers... Anyhow, it isn't a philosophy that is going to get me to consider Nimony for anything.
Not if you compile with optimizations on: plain signed addition in C compiles (with clang -O2) down to the normal x86-64 add instruction, which, for those who aren't familiar with x86 assembly, is just the instruction for adding two numbers with wrapping semantics.

I think the fib example is actually cool though. Integers are not the only possible domain. Everything that supports <=, +, and - is. Could be int, float, a vector/matrix, or even some weird custom type (provided that Nim has operator overloading, which it seems to).
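In Rust the same two's-complement wraparound is spelled out explicitly rather than being undefined behavior, which makes it easy to demonstrate:

```rust
fn main() {
    // Two's-complement wraparound: the behavior a bare add/lea
    // instruction gives you on x86-64.
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    assert_eq!((-1i32).wrapping_add(1), 0);
    println!("ok");
}
```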
May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.
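The "anything with <=, +, and -" observation translates directly into trait-bound terms; a sketch of the generic fib (the Nim concept version presumably reads similarly):

```rust
use std::ops::{Add, Sub};

// Any type with comparison, +, -, and the constants 1 and 2 is "Fibable".
fn fib<T>(n: T) -> T
where
    T: Copy + PartialOrd + Add<Output = T> + Sub<Output = T> + From<u8>,
{
    let (one, two) = (T::from(1), T::from(2));
    if n <= two { one } else { fib(n - one) + fib(n - two) }
}

fn main() {
    assert_eq!(fib(10u32), 55);     // plain integers
    assert_eq!(fib(10.0f64), 55.0); // floats work just as well
    println!("ok");
}
```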
Nim is Choice in many dimensions that other PLang's are insistently monosyllabic/stylistic about - gc or not or what kind, many kinds of spelling, new operator vs. overloaded old one, etc., etc., etc. Some people actually dislike choice because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.
Granted, I don’t know nim. Maybe you can’t define + and - operators for non numbers?
You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that or do a `from system import nil` or etc.).
Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1 and Nim 2): `compiles`, which lets code branch at compile time on whether an expression type-checks.
Last I knew, "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what is the most elegant way to incorporate this extra constraint, but I think it's a mistake to think it is unincorporable.

To address your question about '+': you can define it for non-SomeNumber types, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement whether the situation calls for `+` vs. something else.
Design: https://github.com/nim-lang/RFCs/issues/559
Plan: https://forum.nim-lang.org/t/13357#81170
[1] https://doc.rust-lang.org/reference/patterns.html#r-patterns...
[2] https://github.com/scala/scala3/blob/main/compiler/src/dotty...