FreeBSD Dynamic Programming


Deleted member 70435

Guest
If you’ve been programming for long enough, you’ve probably heard the term dynamic programming. Often a key subject in technical interviews, the idea will also come up in design review meetings or regular interactions with fellow developers. This essay will examine what dynamic programming is and why you would use it. I’ll be illustrating this concept with specific code examples in Swift, but the concepts I introduce can be applied to your language of choice. Let’s begin!

A way of thinking

Unlike specific coding syntax or design patterns, dynamic programming isn’t a particular algorithm but a way of thinking. Therefore, the technique takes many forms when it comes to implementation.

The main idea of dynamic programming is to consider a significant problem and break it into smaller, individualized components. When it comes to implementation, optimal techniques rely on data storage and reuse to increase algorithm efficiency. As we’ll see, many questions in software development are solved using various forms of dynamic programming. The trick is recognizing when optimal solutions can be devised using a simple variable or require a sophisticated data structure or algorithm.

For example, code variables can be considered an elementary form of dynamic programming. As we know, a variable’s purpose is to reserve a specific place in memory for a value to be recalled later.

Code:
//non-memoized function
func addNumbers(lhs: Int, rhs: Int) -> Int {
    return lhs + rhs
}

//memoized function
func addNumbersMemo(lhs: Int, rhs: Int) -> Int {
    let result: Int = lhs + rhs
    return result
}

While addNumbersMemo above provides a simple introduction (the stored value only lives for the duration of a single call), the goal of a dynamic programming solution is to preserve previously seen values, because recomputing them could be inefficient or could prevent you from answering the question at all. This design technique is known as memoization.
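
To make the idea concrete, here is a minimal sketch of memoization in isolation (my own illustration, not part of the original article): a hypothetical memoizedSquare function that caches each result in a dictionary, so repeated calls with the same argument skip the computation entirely.

Code:
//minimal memoization sketch (hypothetical example)
//previously computed results are cached, keyed by the input value
var squareCache = [Int: Int]()

func memoizedSquare(_ n: Int) -> Int {
    //reuse the stored result if this input was seen before
    if let cached = squareCache[n] {
        return cached
    }
    //otherwise compute, store for next time, and return
    let result = n * n
    squareCache[n] = result
    return result
}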

A brute force approach

Consider a classic question: given an unsorted sequence of integers, find the first pair of values that adds up to a given sum (here, 11). Our first approach involves looking at the first value, then reviewing each subsequent value to determine if it provides the difference needed to solve the question. For example, once our algorithm checks the value of the first array item, 8, it will then scan the remaining values for 3 (e.g., 11 – 8 = 3). However, the value 3 doesn't exist, so the algorithm repeats the same process for the next value (in our case, 10) until it finds a successful matching pair. Without going into the details of big-O notation, we can assume this type of solution would have an average runtime of O(n ^ 2) or greater, mainly because our algorithm works by comparing each value with every other value. In code, this can be represented as follows:

Code:
let sequence = [8, 10, 2, 9, 7, 5]

//non-memoized version - O(n ^ 2)
func pairNumbers(sum: Int) -> (Int, Int) {

    for a in sequence {
        let diff = sum - a

        for b in sequence {
            if (b != a) && (b == diff) {
                return (a, b)
            }
        }
    }
    return (0, 0)
}

A memoized approach

Next, let’s try a different approach using the idea of memoization. Before implementing our code, we can brainstorm how storing previously seen values will help streamline the process. While using a standard array is feasible, a set collection object (backed by a hash table, similar to a hash map) would provide an optimized solution.

Code:
//memoized version - O(n + d)
func pairNumbersMemoized(sum: Int) -> (Int, Int) {

    var addends = Set<Int>()

    for a in sequence {
        let diff = sum - a

        if addends.contains(diff) { //O(1) - constant time lookup
            return (a, diff)
        } else {
            //store previously seen value
            addends.insert(a)
        }
    }

    return (0, 0)
}

Using a memoized approach, we’ve improved the algorithm’s average run time efficiency to O(n + d) by adding previously seen values to a set collection object. Those familiar with hash-based structures will know that item insertion and retrieval occur in O(1) – constant time. This further streamlines the solution, as the set is designed to retrieve values in an optimized way regardless of size.
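
As a quick sanity check (my own usage sketch, assuming the sequence declared above), both versions find a valid pair for a target sum of 11, but the memoized version does it in a single pass:

Code:
//usage sketch for the two functions above
print(pairNumbers(sum: 11))         //(2, 9) - nested loops, O(n ^ 2)
print(pairNumbersMemoized(sum: 11)) //(9, 2) - single pass over the sequence

The two results name the same pair in a different order, since each version encounters its matching value at a different point in the scan.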

The Fibonacci sequence

When learning various programming techniques, one topic that comes to mind is recursion. Recursive solutions work by having a function refer to itself, calling itself on smaller instances of the same problem; both algorithms and data structures can be defined this way. A well-known example of recursion can be seen with the Fibonacci sequence, a numerical sequence made by adding the two preceding numbers (0, 1, 1, 2, 3, 5, 8, 13, 21, etc.):

Code:
public func fibRec(_ n: Int) -> Int {
    if n < 2 {
        return n
    } else {
        return fibRec(n - 1) + fibRec(n - 2)
    }
}

When examined, our code is error-free and works as expected. However, notice a few things about the performance of the algorithm:

Positions (n)    fibRec() – Number of times called
2                1
4                5
10               109
15               1219
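
One way to see this growth for yourself (my own instrumented sketch, not part of the original code) is to add a counter that increments on every invocation. The exact totals depend on what you count as a call, but the exponential blow-up is easy to confirm:

Code:
//instrumented sketch: count how often the function is invoked
var callCount = 0

func fibRecCounted(_ n: Int) -> Int {
    callCount += 1
    if n < 2 {
        return n
    }
    return fibRecCounted(n - 1) + fibRecCounted(n - 2)
}

callCount = 0
_ = fibRecCounted(15)
print(callCount) //grows roughly as fast as the Fibonacci numbers themselves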

As shown, there’s a significant increase in the number of times our function is called. Similar to our previous example, the algorithm’s running time grows sharply with the input size: in this case, exponentially. This occurs because the operation does not store previously calculated values. Without access to stored results, the only way to obtain the required (preceding) values is through more recursion. Assuming this code were used in a production setting, the function could introduce serious performance problems. Let’s refactor the code to support a memoized approach:

Code:
func fibMemoizedPosition(_ n: Int) -> Int {

    var sequence: [Int] = [0, 1]
    var results: Int = 0
    var i: Int = sequence.count

    //trivial cases: fib(0) = 0 and fib(1) = 1 are returned directly
    guard n >= i else {
        return n
    }

    //all other cases: build each value from the two stored before it
    while i <= n {
        results = sequence[i - 1] + sequence[i - 2]
        sequence.append(results)
        i += 1
    }

    return results
}

This revised solution now supports memoization through the use of stored values. Notice how the refactored code no longer requires a recursive technique. The two most recently stored values are added to produce each new result, which is appended to the main sequence array. Even though the algorithm’s performance still depends on the input size, our revisions have increased algorithmic efficiency to O(n) – linear time. In addition, our iterative solution should be easier to revise, test and debug, since only a single function call is placed on the call stack, reducing complexities with memory management and object scope.
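
For contrast, memoization can also preserve the recursive shape. Here is a sketch (my own, not from the original article) of a top-down variant that caches each computed position in a dictionary, so every position is computed at most once:

Code:
//top-down memoized variant (sketch): O(n) time, but still recursive
var fibCache = [Int: Int]()

func fibMemoRec(_ n: Int) -> Int {
    if n < 2 {
        return n
    }
    //reuse a previously computed value when available
    if let cached = fibCache[n] {
        return cached
    }
    let result = fibMemoRec(n - 1) + fibMemoRec(n - 2)
    fibCache[n] = result
    return result
}

Whether you prefer this top-down form or the iterative one above is largely a matter of taste; the iterative version avoids deep call stacks, while the recursive version stays closer to the mathematical definition.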

Conclusion

We’ve learned that dynamic programming isn’t so much a specific design pattern as a way of thinking. Its goal is to preserve previously seen values in order to increase time efficiency. While our examples were basic algorithms, forms of dynamic programming appear throughout real programs, from the use of simple variables to complex data structures.
 
Certainly, you haven't understood yet. It's an idea: once the idea is clear, you can have a vision of how to implement something, perhaps in another session. Of course it has nothing to do with FreeBSD directly, but would you develop something without an idea? Maybe not, maybe yes, if you read the FreeBSD manual. It's something simple, but it makes some people really understand.

What you don't understand here is how it's going to be implemented; that's the algorithmic notion, in FreeBSD.
 
What you don't understand here is how it's going to be implemented; that's the algorithmic notion, in FreeBSD.
Are you saying this sort of code design should go into FreeBSD itself? I don't think that is a bad idea but it might take some time to find specific areas to start with where it might be a benefit (perhaps logic for dealing with filesystem nodes? file descriptors? task scheduling?). For that, the developers mailing list (and just analysing the code) will be useful.
 
Data + algorithms = programs

No offense to the OP, but all programming is a way of thinking.
Object Oriented? A way of thinking. You can implement it in C, C++, Java, whatever you want.
Dynamic programming seems to be the current iteration of past thinking.
I've seen a lot of these concepts presented in relation to a language: Swift, Go, Rust, Java, JavaScript, Python, etc.

I realize the first example is trivial, but I think a good compiler would produce the same machine code for both: a couple of register loads, an arithmetic operation, and a register returned. There is no benefit to future calls in having "result" be a variable. If it's persistent (like a static in C/C++), you wind up wasting memory because you are always overwriting it.

I think all of these programming paradigms boil down to "it's not a bad idea, but it may not be the best idea"
 
WARNING: Highly personal, Friday-evening-based opinion incoming. Furthermore, OP clearly stated that what he tries to convey can be applied to "the language of your choice". So this is no criticism towards OP.

mer That's one of the reasons why I (currently?) like modern C++ so much. While most popular languages try to address (they often call it "fix") a specific problem, modern C++ just allows you to pick whatever you think is most suitable. You can do OOP, you can do functional, you can do concurrency, you can do memory-safe, you can do metaprogramming, you can do <blank> - and you can mix and match those at will!

Ironically enough, those popular languages often seem to fail at "fixing" the very problem they set out to "fix" in the first place if they evolve for long enough. At the same time, you have to deal with additional disadvantages and a loss in flexibility.
I guess Java is the textbook example here with it being "the solution for embedded".

I'm a strong advocate for ensuring that developers/programmers/engineers understand the very core principles of computer science and the underlying mechanisms. Something that seems to have been lost in time. These days it's mainly about getting a result as quickly as possible. Yeah, let's use Rust because I can't make memory management mistakes. Let's use Go because I don't have to worry about thread safety - and then you're suddenly stuck in an ecosystem that is less flexible, often more cumbersome in different ways, and you still didn't learn the basics which are needed for proper design, implementation & debugging of whatever it is you're attempting to do.

Obviously (and as usual from me), I don't mean to say that C++ is the holy grail and everything else is shit. That's certainly not the case. I'm a fan of "the right tool for the job". I just like that - at least for the stuff I usually do - C++ can be the right tool.
C++ obviously has downsides, shortcomings and other problems too. We'll always find examples to support whatever argument you choose to make.
 
Warning, Early Friday PTO before a weekend opinion :)
jbodenmann Glad you specified modern C++. The language has evolved a ton since the beginning; toss in the C++ STL and one can do a lot quickly.

I think lots of languages have a place: Java, I think, is good for quick prototyping and stuff with GUIs (especially cross-platform).

Principles and fundamentals. Goes back to my "data + algorithms = programs". I can't locate the book on my shelf immediately but it's an old book that still holds true.
 
We can say that Rust has advanced in an impressive way. Today you can write a kernel module in Rust and have a whole development environment ready. I'm not saying that Rust is better, but we really have to analyse it.

The C and C++ languages are excellent for embedded systems and other components that are gaining market share today; you have almost a whole ecosystem for both.

You can develop a complete operating system in Rust, with the kernel entirely in Rust. And if you worry about something hardware-related, that's no problem: since there's a community behind it, a driver can be written easily, implemented and compiled, and the hardware works as a module.

I have to add another thing to the question I raised: Rust is a great language for systems in general, and the language of the future too, because of its modularity. You have a problem to solve; the problem is that developers don't accept it so easily. But there are many different open projects on GitHub.
 
and the language of the future too, because of its modularity.
I don't quite agree with this. The webdev JavaScript/NPM style of modularity that crates.io is copying does tend to crumble after a while. Dragging in dependencies left, right and center racks up horrific technical debt that doesn't stand the test of time well.

A lean but stable standard library like C has had a provable success. It may not be objectively better as such but in practice tends to win out in the end.
 
Also a Friday evening take (half a beer in too!). I just happened to watch this today:
Heh, there used to be quite a funny diagram with all the different rules involved in value / zero / default initialization in C++. Annoyingly I can't quite track it down. It was very large and very complex ;)

I tend to use something similar to boost's value init (but not boost itself, because it is a cesspit!) to just initialize my variables automatically and be done with it. Some developers moan that it is "slow" and yet don't seem to understand that the other languages they love (such as Rust) do this by default anyway...

The more I learn about modern C++ the more I think it's time to look at something else.
I like C++ but I am looking into moving away from the standard library because it is still not safe enough and is starting to change a little too much to cater for all these new operators they are inventing. I will be seen as a bit of a heretic and pariah, but honestly the last thing that came out that was good was around C++0x with Technical Report 1's shared/weak_ptr.
 
Or change for the new people coming in who want change for the sake of change and programming is "tooooo haaaaaard". Rather than do the work, they want an app for that.
Definitely, some of the newer features are a crutch for weak (mostly careless) developers. At the same time, it is really annoying to see interns (it's always the interns for some reason) make a real mess for themselves with the newer async stuff. They seem attracted by it (perhaps due to JavaScript tutorials).

You also see auto used too often in areas where the type can't be easily seen so you have to churn through all the code to find it. Nothing more annoying than seeing:

Code:
auto info = partition->getItemNotificationInfo();

What is the variable type? An Info, an ItemInfo, an ItemNotification, a Notification? You end up having to jump to the function to check whilst at the same time watching others wait for their IDE to catch up and then finally use the intellisense to decipher it. Very slow workflow.
 
One thing about all these "new and improved, with a free set of Ginsu knives" features being added: you don't need to use them.
I agree on the shared/weak_ptr stuff, and some of the STL is useful so you don't need to reinvent the wheel (container stuff like the lists and maps).
One just needs to be mindful of any threading issues and dtors on objects.

"You also see auto used too often in areas where the type can't be easily seen so you have to churn through all the code to find it."

Yep. A huge difference between writing code and debugging/maintaining it. Too many people "program" but someone else has to integrate and find the root cause for a bug.
 
One just needs to be mindful of any threading issues and dtors on objects.
Multi-threading is one specific reason for me to avoid the standard library, it just isn't geared up for it. They are trying with lockless atomics and things like that but it is the design that isn't really suitable. The smart-pointers and (atomic) reference counting need to be considerably different in my opinion.

Too many people "program" but someone else has to integrate and find the root cause for a bug
"Write-only" code ;)
 
Or change for the new people coming in who want change for the sake of change and programming is "tooooo haaaaaard". Rather than do the work, they want an app for that.
And the irony being they're actually making it much harder. Mr. Josuttis teaches C++ to beginners, and part of that talk is him complaining that he doesn't know what to teach. One of the funnier slides is where he points out there are at least 19 ways to initialize an int.

You also see auto used too often in areas where the type can't be easily seen so you have to churn through all the code to find it. Nothing more annoying than seeing...
You are such a heretic Mr. Pedersen! Haven't you heard that you should AAA (Almost Always use Auto)? Josuttis has a great slide where he makes fun of this.
 
Code:
auto info = partition->getItemNotificationInfo();

What is the variable type? An Info, an ItemInfo, an ItemNotification, a Notification? You end up having to jump to the function to check whilst at the same time watching others wait for their IDE to catch up and then finally use the intellisense to decipher it. Very slow workflow.
On the other hand, once you declare a type you immediately know all of the methods you can call without ever consulting the documentation (or looking at the class declaration), right?
 
At first I refused to use auto since I am a bit conservative and sceptical about things added to "modern" C++.
But now I use it everywhere I can. It simplifies things a lot.
 
Use of auto can indeed simplify the way the code looks: you get shorter lines because it's only 4 characters, and when you are simply writing code it's faster to type. (Yes, that is sarcasm.)

If you declare a type, no, of course you don't immediately know everything about it, but at least you have a starting point for grepping. That makes finding and fixing (root-causing) bugs just a little bit easier.
For the line of code quoted, you need to start by looking at what "partition" is; if there is only a single method named getItemNotificationInfo() then you know what type "info" is. But if getItemNotificationInfo() is overloaded, then you spend more time looking at how "info" is used before you know what type it is.

Toss in C++ classes and you may wind up spending more time looking at code to figure out what "info" is.

Whole lot of difference between simply writing code and having to pick up someone else's code months later to fix a bug.

Maybe auto is good because it can help focus on the "what something does" instead of "how something is done" (thinking Object Oriented design).
Maybe it's good because you can write a good algorithm once using auto and then cut and paste it everywhere in your code without having to edit it as much, but of course one could use a template for that too.

One of the best things about all these new things added to C++:
You don't have to use them.
 
On the other hand, once you declare a type you immediately know all of the methods you can call without ever consulting the documentation (or looking at the class declaration), right?
Of course.

Code:
ClassContainingIdNameEmployeeNumberDepartmentPointerAndManagerId myemp = findEmp();

I also dislike it when *lazy* programmers aren't descriptive with their naming. ;)

At the very least though, once you know the type's name, you generally know which header to scan through.
 
If you're going to use auto it would be
auto myEmp = findEmp();

I don't think this would even compile; you are trying to use a class name as a variable name.

auto ClassContainingIdNameEmployeeNumberDepartmentPointerAndManagerId = findEmp();

;)
 