I think F# programmers lack that gamut because they get comfortable in the eager-execution, type-safe world and stay there, with no particular reason to learn dynamic programming techniques. There is also the effect that F# allows less advanced functional programmers to be productive, so if you randomly sample currently active functional programmers, an F# programmer is less likely to be advanced.
Scala developers were referred to as Java refugees, Swift developers as Objective-C refugees, and F# developers as C# refugees. A weird side effect of Microsoft doing a better job with C# is that there is less of a push to F#. Plus F#, by virtue of being in Dev Div, had its core value proposition (OCaml on .NET) undermined by the Win vs Dev Div internal battles that tried and failed to kill .NET.
I have been programming for 20 years, and yet despite having used dynamic languages I don’t actually know what it means to leverage dynamic programming techniques. For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript. I get the impression that dynamic programming, besides the odd untyped line of code, is best used only for extremely specific cases?
The problem is the word dynamic is overloaded, and I'm not at all sure which one your parent comment meant.
"Dynamic programming" traditionally has nothing to do with dynamic languages but is instead a class of algorithms that are "dynamic" in the sense that they represent time.[0] This might be what your parent was referring to because these algorithms lend themselves well to Haskell's lazy evaluation, and they reference F# as being eager.
That said, they also talk about F# as being type safe, so they could also be referring to dynamic programming languages. The grandparent was definitely referring to this one, but "dynamic programming techniques" sounds much more like the algorithmic meaning.
To be clear, I wasn't referring to 'dynamic programming' but, as you say, the use of a dynamic language, or programming without types, mimicking what I assumed the original poster I replied to meant.
My guess is that interviewees wishing to return to the typed world they are comfortable with would first try to type the JSON they are working with. Given that the JSON is messy, this could be an unbounded amount of work that is unlikely to pay off within the span of an interview.
Ok that is very confusing because "dynamic programming" is a very specific thing, and also super popular in leetcode questions. Maybe half the questions on leetcode.com involve dynamic programming.
It has absolutely nothing to do with dynamically typed programming. It's also a really terrible name for what is essentially caching.
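For example, here's the whole trick in a few lines of C# (a toy memoized Fibonacci, invented for illustration):

    using System;
    using System.Collections.Generic;

    class Memo
    {
        // "Dynamic programming" in practice: cache sub-results so
        // each subproblem is computed only once.
        static readonly Dictionary<int, long> cache = new();

        static long Fib(int n)
        {
            if (n < 2) return n;
            if (cache.TryGetValue(n, out var hit)) return hit;
            long result = Fib(n - 1) + Fib(n - 2);
            cache[n] = result;
            return result;
        }

        // Runs instantly; the naive recursion would take ages.
        static void Main() => Console.WriteLine(Fib(90));
    }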
By untyped I assume you mean dynamic languages? In some contexts it's not convenient to lug around a type checker (embedded languages, for example). Other times, when doing macro-heavy programming (Lisp, Forth), it's hard to build a type system that can properly type check the code or resolve the implicit types in a reasonable amount of time.
In the context of JSON, you can work on it without types from a typed language. It's just that, as a force of habit, coders may choose to spend time adding types to things when they shouldn't.
> For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript
That's the type bias. If you look at a non-typed codebase, it always feels like it will be better with types. But if you had a chance to go back in time and start the same codebase in Typescript, it would actually come out way worse than what you have today.
Types can be great when used sparingly, but with Typescript everyone seems to fall into a trap of constantly creating and then solving "type puzzles" instead of building what matters. If you're doing Typescript, your chances of becoming a product engineer are slim.
There is much naïveté among the strongly typed herd. When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task, reaching for a strongly typed language is never the best option for code complexity and speed of development. Yet the hammer will often succeed if you hit hard enough and club the nail back to shape when you invariably bend it.
> There is much naïveté among the strongly typed herd.
Is exactly the reverse also true? Let me try: "There is much naïveté among the weakly typed herd." For every person who thinks Python or Ruby can be used for everything, there is another person who thinks the same for C++ or Rust.
Also, the example that you gave is incredibly specific:
> When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task
Can you provide a concrete example? And what is "data engineering"? I had never heard that term before this post.
I'm a data engineer, it's a fairly new role so it's not well defined yet, but most data engineers write data pipelines to ingest data into a data warehouse and then transform it for the business to use.
I'm not sure why using a static language would make translating data types difficult, but I add as many typehints as possible to my Python, so I rarely do anything with dynamic types. I guess they're saying that for small tasks involving lots of types, most of your code in a static language will be type definitions, so a dynamic language lets you focus on writing the transformation code.
Thank you for the reply. Your definition of data engineer makes sense. From my experience, I would not call it a new role. People were doing similar things 25 years ago when building the first generation of "data warehouses". (Remember that term from the late 1990s!?)
I am surprised that you are using Python for data transformation. Isn't it too slow for huge data sets? (If you are using C/C++ libraries like Pandas/NumPy, then ignore this question.) When I have huge amounts of data, I always want to use something like C/C++/Rust/C#/Java to do the heavy lifting because it is so much faster than Python.
Yes, it's definitely a new word for an old concept, same as the term data scientist for data analyst or statistician.
I find Python is fast enough for small to medium datasets. I've normally worked with data that needs to be loaded each morning or sometimes hourly, so whether the transformation takes 1 minute or 10 minutes it doesn't matter. The better way is of course to dump the data into a data warehouse as soon as possible and then use SQL for everything, so I only use Python for things that SQL isn't suited for, like making HTTP requests.
Using a static language to manipulate complex types, particularly those sourced from a different type system (say complex nested Avro, SQL, or even complex JSON) is much more awkward when the types cannot be normalized into the language automatically as can be done with dynamic languages. Static languages require more a priori knowledge of data types, and are very awkward at handling collections with diverse type membership. Data has many forms in reality -- dynamic languages are much more effective at manipulating data on its own terms.
You realize every single thing that dynamically-typed languages can do with data types, statically-typed languages can do too? Except when it matters, they can also choose to do things dynamically-typed languages can't.
Lots of people assume static typing means creating domain types for the semantics of every single thing, and then complain that those types contain far more information than they need. Well, stop doing that. Create types that actually contain the information you need. Or use the existing ones. If you're deserializing JSON data, it turns out that the deserialization library already has a type for arbitrary JSON. Just use it, if all you're doing is translating that data to another format. Saying "this data is JSON I didn't bother to understand the internal content of" is a perfectly fine level to work at.
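For instance, a minimal C# sketch with System.Text.Json (the values here are made up for illustration):

    using System;
    using System.Text.Json.Nodes;

    // JSON you didn't bother to model: JsonNode is the library's
    // type for "arbitrary JSON", so no domain classes are needed.
    var doc = JsonNode.Parse("{\"name\":\"ada\",\"tags\":[\"x\",\"y\"]}")!;

    // Read only the part you care about...
    var name = (string?)doc["name"];

    // ...and transform without ever describing the full shape.
    doc["name"] = name?.ToUpperInvariant();
    Console.WriteLine(doc.ToJsonString());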
About monkeypatching, perhaps we have different definitions. From time to time, I need to modify a Java class from a dependency that I do not own/control. I copy the decompiled class into my project with the same package name. I make changes, then run. To me, this is monkeypatching for Java. Do you agree? If not, how is it different? I would like to learn. Honestly, I discovered that Java technique years ago by accident.
Another technique: While the JVM is running with a debugger attached, it is possible to inject a new version of a class. IDEs usually make this seamless. It also works when remote debugging. Do you consider this monkeypatching also?
> You can’t do monkeypatching or dynamically modify the inheritance chain of an object in a statically typed language.
There's no theoretical reason you can't. No language that I know of provides that combination of features, because monkey-patching is a terrible idea for software engineering... But there's no theoretical reason you couldn't make it happen.
I think you've conflated static typing with a static language. They're not the same thing and can be analyzed separately.
So how would a statically typed language support conditionally adding methods at runtime? Let's say the code adds a method with a name and parameters specified by user input at runtime. How could this possibly be checked at compile time?
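For concreteness: C# will happily let you do the runtime half by opting out of checking with `dynamic` (a sketch; the point is precisely that nothing about the added member is checked at compile time):

    using System;
    using System.Dynamic;

    // C# is statically typed, but `dynamic` defers member
    // resolution to runtime, so members can be attached
    // conditionally, based on input:
    dynamic obj = new ExpandoObject();
    if (Console.ReadLine() == "greet")
        obj.Greet = (Func<string, string>)(name => $"hello {name}");

    // The cost is the compile-time check itself: if the branch
    // above didn't run, or if you typo obj.Gret, this only fails
    // when the line executes.
    Console.WriteLine(obj.Greet("world"));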
You could add methods that nothing could call, sure. It would be like replacing the value with an instance of an anonymous subclass with additional methods. Not useful, but fully possible. Ok, it would be slightly useful if those methods were available to other things patched in at the same time. So yeah, exactly like introducing an anonymous subclass.
But monkey-patching is also often used to alter behaviors of existing things, and that could be done without needing new types.
You would need another feature in addition: the ability to change the runtime type tag of a value. Then monkey-patching would be changing the type of a value to a subclass that has overridden methods as you request. The subclasses could be named, but it wouldn't have much value. As you could repeatedly override methods on the same value, the names wouldn't be of much use, so you might as well make the subclass anonymous.
In another dimension, you could use that feature in combination with something rather like Ruby's metaclasses to change definitions globally in a statically-typed language.
I can't think of a language that works this way currently out there, but there's nothing impossible about the design. It's just that no one wants it.
In a dynamic language, everything is only defined at runtime.
Given that, a sketch of a statically-typed system would be something like... At the time a definition is added to the environment, you type check against known definitions. Future code can change implementations, as long as types remain compatible. (Probably invariantly, unless you want to include covariance/contravariance annotations in your type system...)
This doesn't change that much about a correct program in a dynamic language, except that it may impose some additional ordering requirements on code execution: all the various method definitions must be loaded before code using them is loaded. That's a bit stricter than the current requirement that the methods must be loaded before code using them is run. But the difference would be pretty tractable to code around.
And in exchange, you'd get immediate feedback on typos. Or even more complex cases, like failing to generate some method you had expected to create dynamically.
Ok, I can actually see some appeal here, though it's got nothing to do with monkey-patching.
I love using "mixed" dynamic/static typed languages in these scenarios... you can do that data manipulation without types, but benefit from types everywhere else... my two favourite "mixed" languages are Groovy on the JVM, and Dart elsewhere... Dart now has a very good type system, but still supports `dynamic` which makes it as easy as Python to manipulate data.
A major problem with doing data transformation in statically typed languages is that it's easy to introduce issues during serialization and deserialization. Say the file contains records with an extraProperty field that your DTO doesn't declare:

    class MyDto
    {
        public string Name { get; set; }
        public string Value { get; set; }
    }

    var myObjs = DeserializeFromFile<MyDto>(filePath);
    SerializeToFile(myObjs, filePath2);

filePath2 would end up without the extraProperty field: the round trip silently drops anything the type doesn't declare.
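For what it's worth, some serializers have an escape hatch for exactly this; in System.Text.Json it's [JsonExtensionData]. A sketch:

    using System.Collections.Generic;
    using System.Text.Json;
    using System.Text.Json.Serialization;

    class MyDto
    {
        public string Name { get; set; }
        public string Value { get; set; }

        // Overflow bucket: properties the type doesn't declare land
        // here on deserialize and are written back on serialize, so
        // the round trip no longer silently drops fields.
        [JsonExtensionData]
        public Dictionary<string, JsonElement> Extra { get; set; }
    }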
You can also write code like
    function PrintFullname(person) {
        WriteLine(person.FirstName + " " + person.LastName)
    }
And it will just work so long as the object has those properties. In a statically typed language, you’d have to have a version for each object type or be sure to have a thoughtful common interface between them, which is hard.
All that being said, I generally prefer type safe static languages because the type system has saved my bacon on numerous occasions (it's great at telling me I just changed something I use elsewhere).
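For the record, here is a hedged C# sketch of the "thoughtful common interface" route mentioned above (the names are invented for illustration):

    using System;

    // Any type that opts in to the interface can be printed, and a
    // missing property is a compile-time error. The trade-off: every
    // participating type has to declare the interface up front.
    interface INamed
    {
        string FirstName { get; }
        string LastName { get; }
    }

    static class Printer
    {
        public static void PrintFullName(INamed person) =>
            Console.WriteLine($"{person.FirstName} {person.LastName}");
    }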
You can write code in a statically typed language that treats the data as strings. The domain modelling is optional, just choose the level of detail that you need:
1. String
2. JSON
3. MyDTO
If you do choose 3, then you can avoid serde errors using property-based testing.
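A sketch of that round-trip property in C# with FsCheck, assuming FsCheck can derive a generator for the DTO (complex shapes may need a custom Arbitrary):

    using System.Text.Json;
    using FsCheck;

    // A record, so equality is structural.
    record MyDto(string Name, string Value);

    static class RoundTrip
    {
        // Property: deserialize(serialize(x)) equals x for
        // arbitrary generated inputs.
        public static void Run() =>
            Check.Quick(Prop.ForAll<MyDto>(dto =>
            {
                var json = JsonSerializer.Serialize(dto);
                var back = JsonSerializer.Deserialize<MyDto>(json);
                return dto.Equals(back);
            }));
    }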
"Most" (I mean "all", but meh - I'm sure there's some obscure exception somewhere) parsers will have the ability to swap between a strict DTO interpretation of some data, and the raw underlying data which is generally going to be something like a map of maps that resolves to strings at the leaf nodes. Both have their uses. The same can also be done easily enough by hand as well, if necessary.
If you are truly interested in understanding my point of view -- a great way to do it would be to learn how to use this Clojure DSL: https://github.com/redplanetlabs/specter
You could also think about why Nathan Marz may have bothered to create it.
As for data engineering, I think ChatGPT could tell you a lot, and its training data goes up to 2021.
As someone who tried very hard to incorporate specter into their speech-to-text pipeline, I feel compelled to point out that it gave me a lot of NullPointerExceptions while I was learning it. I don't think it's a great example of the value of dynamically-typed langs.
In retrospect, Marz's hope that specter might get incorporated in clj core was wildly optimistic (even if the core team wasn't hostile to outsider contributions), because it feels like he built it to his own satisfaction, and never got around to removing the sharp edges that newcomers cut themselves on.
It's a shame, because I think specter is a cool idea, and would love to see a language based on its ideas.
They keep trying to kill .NET; just look at how much WinDev keeps doubling down on COM and pushing subpar frameworks like C++/WinRT.
One would expect that by now, out-of-process COM would be supported across all OS extension points, instead it is still pretty much in-process COM, with the related .NET restrictions.
Then there is the whole issue that since Longhorn, most OS APIs are based on COM (or WinRT), not always with .NET bindings, and even VB 6 had better ways to use COM than .NET in its current state (.NET Core lost COM tooling).
Doesn't look to me like they're trying to kill .NET at all. Maybe F# in particular isn't getting the love and attention it deserves but they'd have to be mental to be actively trying to kill off something as popular as .NET
Kill in the sense that from WinDev point of view, the less .NET ships on Windows the better.
In case you aren't aware, WinRT basically marks the turning point: the ideas started with Longhorn were rewritten on top of COM.
With WinRT, they basically went back to the drawing board of Ext-VOS, a COM based runtime for all Microsoft languages, hence why .NET on WinRT/UWP isn't quite the same as classical .NET and is AOT compiled, with classes being mapped into WinRT types (which is basically COM, with IInspectable in addition to IUnknown and .NET metadata instead of COM type libraries).
Mostly driven by Steven Sinofsky and his point of view on managed code.
This didn't turn out as expected, but the idea going forward is still to make WinRT additions to classical COM usable in Win32, outside the UWP application identity model.
"Turning to the past to power Windows’ future: An in-depth look at WinRT"
Beyond WinRT, .NET (Core) has supported all the raw COM, including COM component hosting since at least .NET Core 3.0. It's Windows-Only, of course, to use that, but that should go without saying. It's mostly backwards compatible with the old .NET Fx 1.0 ways of doing COM and a lot of the old code still "just works". .NET has proven that with .NET 5+ and all the many ways it (again) supports raw Win32 fun in the classic WinForms ways. (And all the ways that even .NET Fx 1.0 code has a compatibility path into .NET 5.)
It would have been nice if Windows had stronger embraced .NET, but WinRT is still closer to .NET in spirit than old raw COM anyway.
.NET Core doesn't do COM type libraries like the Framework does, you are supposed to manually write IDL files like in the old days.
Additionally, the CCW/RCW infrastructure is considered outdated, and you are supposed to use the new, more boilerplate-heavy COM APIs introduced for COM and CsWinRT support.
Lots of changes, with more work, for little value.
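To give a feel for the newer boilerplate, here is a minimal sketch of the .NET 8 source-generated COM interop (the interface and GUID are placeholders):

    using System.Runtime.InteropServices;
    using System.Runtime.InteropServices.Marshalling;

    // You now declare the interface and GUID by hand; a source
    // generator emits the wrapper glue that the old Framework
    // tooling used to produce from type libraries automatically.
    [GeneratedComInterface]
    [Guid("c5c0a6f0-0000-0000-0000-000000000001")] // placeholder
    public partial interface ICalculator
    {
        int Add(int a, int b);
    }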
This seems to imply that F# programmers do 'poorly' because the language 'protects' them. But isn't that good? Why have a language that purposely tries to trip you (dynamic), where you count as a 'good' programmer only if you don't get tripped? That rewards people who are good at using a bad language. If using F# means you don't learn the techniques needed for other, worse languages, that doesn't make the programmer bad; it just means F# is more seamless.
Indeed, I'm suggesting an alternative explanation for the given observations, based on the absence of a strong selection bias. I'm of the strong opinion that F# is a great language and that people of different skill levels can be productive in it, as opposed to a C++/Lisp combo where only the most careful programmers get to keep both of their feet.
F# is a slice through the language design space that optimizes for developer productivity, other languages with different design choices are optimized and are indeed better for other things.
I think there is a benefit to learning 'bad' languages, as they teach you about the different design trade-offs that are available. A person with dynamic language experience would have known that the given JSON task was tractable without types, and wouldn't have started the task with a 'type all the things' mentality.