
This post is a personal summary after reading Multiparadigm Programming (by Yoo In-dong).
Summarizing the concepts covered in chapters 1 and 2 from the perspectives of imperative programming, object-oriented programming, and functional programming:
- Generators create iterators using imperative code
  - They produce a lazy evaluation effect at yield points
  - This aligns with the LISP-like thinking that 'code is a list and a list is code'
- Iterators are implementations of the iterator pattern
  - They take the iterator pattern from object-oriented programming and realize it with imperative code
- Iterable iterators can be handled in imperative, object-oriented, and functional ways (a small sketch follows below)
  - They can be handled imperatively by calling the next() method
  - They can be made object-oriented by creating classes that handle iterable iterators
  - They can be implemented functionally through higher-order functions
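As a rough illustration of those three viewpoints (a minimal sketch of my own, not code from the book), the same generator-produced iterator can be driven with next(), wrapped in a class, or handed to a higher-order function:

function* naturals(end = Infinity): IterableIterator<number> {
  let n = 1;
  while (n <= end) yield n++; // lazy: each value is produced at the yield point
}

// Imperative: drive the iterator by calling next() directly
const iter = naturals(3);
console.log(iter.next().value, iter.next().value, iter.next().value); // 1 2 3

// Object-oriented: a class that wraps an iterable iterator
class Numbers {
  constructor(private iterable: Iterable<number>) {}
  [Symbol.iterator]() {
    return this.iterable[Symbol.iterator]();
  }
}
console.log([...new Numbers(naturals(3))]); // [1, 2, 3]

// Functional: pass the values to a higher-order function (reduce)
const sum = [...naturals(3)].reduce((a, b) => a + b, 0);
console.log(sum); // 6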
Code as Data - Lists Containing Logic
[for, i++, if, break] - Thinking of Code as Lists
The mindset of viewing code as lists becomes a powerful tool for expanding programming paradigms. Using this thinking, you can write more readable and maintainable code in functional programming. Let's look at examples of how to write it well.
A Function Written Imperatively to Square and Sum n Odd Numbers
function sumOfSquaredOdds(limit: number, array: number[]): number {
  let sum = 0;
  for (const num of array) {
    if (num % 2 === 1) {
      sum += num * num; // Square and add
      if (--limit === 0) break; // Stop once `limit` odd numbers have been processed
    }
  }
  return sum;
}
// Usage example
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(sumOfSquaredOdds(3, numbers)); // 1² + 3² + 5² = 35
console.log(sumOfSquaredOdds(2, numbers)); // 1² + 3² = 10
Breaking down the operation of this function:
- Iteration (for)
- Condition check (if (num % 2 === 1))
- Value transformation (num * num)
- Accumulation (sum +=)
- Control (if (--limit === 0) break)
Step-by-Step Functional Transformation
Step 1: Replace Conditionals with filter
function sumOfSquaredOdds(limit: number, array: number[]): number {
let sum = 0;
for (const num of filter(a => a % 2 === 1, array)) {
sum += num * num;
if (--limit === 0) break;
}
return sum;
}
Now inside the loop, we can focus solely on squaring and summing without needing to check if the number is odd.
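The filter here is not Array.prototype.filter but a lazy filter that works on any iterable and is itself a generator. A minimal sketch of such a function (my own approximation; the book defines its own version) could look like this:

function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) {
    if (f(a)) yield a; // elements are produced lazily, one at a time
  }
}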
Step 2: Replace Value Transformation with map
function sumOfSquaredOdds(limit: number, array: number[]): number {
let sum = 0;
for (const num of map(a => a * a, filter(a => a % 2 === 1, array))) {
sum += num;
if (--limit === 0) break;
}
return sum;
}
The list itself becomes "a list of squared odd numbers," so in the loop, we only need to add them.
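Likewise, the map here is assumed to be a lazy, generator-based map over iterables; a minimal sketch under that assumption:

function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) {
    yield f(a); // each value is transformed only when it is consumed
  }
}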
Step 3: Replace Count Limitation with take
function* take<T>(limit: number, iterable: Iterable<T>): Generator<T> {
const iterator = iterable[Symbol.iterator]();
while (limit-- > 0) {
const { value, done } = iterator.next();
if (done) break;
yield value;
}
}
function sumOfSquaredOdds(limit: number, array: number[]): number {
let sum = 0;
for (const num of take(limit, map(a => a * a, filter(a => a % 2 === 1, array)))) {
sum += num;
}
return sum;
}
We've even abstracted the break statement into a function. The time complexity remains the same, but the control structure has become declarative.
Step 4: Replace Summation with reduce
function sumOfSquaredOdds(limit: number, array: number[]): number {
return reduce(
(a, b) => a + b,
0,
take(limit, map(a => a * a, filter(a => a % 2 === 1, array)))
);
}
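The reduce used in this step is assumed to take an accumulator function, an initial value, and an iterable; a minimal sketch under that assumption:

function reduce<A, Acc>(
  f: (acc: Acc, a: A) => Acc,
  acc: Acc,
  iterable: Iterable<A>
): Acc {
  for (const a of iterable) {
    acc = f(acc, a); // consume the lazy pipeline one element at a time
  }
  return acc;
}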
Step 5: Improve Readability with Chaining
function sumOfSquaredOdds(limit: number, array: number[]): number {
return fx(array)
.filter(a => a % 2 === 1)
.map(a => a * a)
.take(limit)
.reduce((a, b) => a + b, 0);
}
So far, we've transformed a function implemented in imperative code into a functional programming style. The code has become declarative, and readability has greatly improved.
Looking at the result, we've replaced each part of the code with list processing functions, which is essentially creating nested lists.
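The fx function above wraps an iterable in a chaining class like the FxIterable introduced in chapter 2 of the book. As a rough sketch built on the lazy filter, map, take, and reduce shown earlier (my own approximation; the book's implementation may differ in detail):

class FxIterable<A> {
  constructor(private iterable: Iterable<A>) {}

  [Symbol.iterator]() {
    return this.iterable[Symbol.iterator]();
  }

  filter(f: (a: A) => boolean): FxIterable<A> {
    return new FxIterable(filter(f, this.iterable));
  }

  map<B>(f: (a: A) => B): FxIterable<B> {
    return new FxIterable(map(f, this.iterable));
  }

  take(limit: number): FxIterable<A> {
    return new FxIterable(take(limit, this.iterable));
  }

  reduce<Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc): Acc {
    return reduce(f, acc, this.iterable);
  }

  to<R>(converter: (iterable: Iterable<A>) => R): R {
    return converter(this.iterable); // hand the lazy iterable to any function
  }
}

const fx = <A>(iterable: Iterable<A>) => new FxIterable(iterable);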
Concepts Applicable Across Languages and Paradigms
Beyond the JavaScript and TypeScript examples we've looked at, similar mechanisms can be implemented in various languages such as Clojure, Kotlin, Swift, Scala, C#, and Java. Modern languages are actively applying and advancing functional paradigms. Even Java, a traditional object-oriented language, has evolved into a multi-paradigm language with a rich set of higher-order functions.
In modern programming, the boundaries between paradigms are blurring. Imperative, object-oriented, and functional programming are evolving into styles that can replace and complement one another.
Generator:Iterator:LISP - Lazy Evaluation and Safe Composition
Let's expand on the view that Generator, Iterator, and LISP can completely replace each other by implementing higher-order functions like find, every, and some using only combinations of list processing functions.
Implementing find
The find function iterates through an iterable and returns the first element for which f returns true, or undefined if no element satisfies the condition.
type Find = <A>(f: (a: A) => boolean, iterable: Iterable<A>) => A | undefined;
First, let's write the function in imperative code:
function find<A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined {
const iterator = iterable[Symbol.iterator]();
while (true) {
const { value, done } = iterator.next();
if (done) {
return undefined;
}
if (f(value)) {
return value;
}
}
}
This code works as follows:
- Create an iterator object to prepare to iterate through the iterable.
- Call the iterator's next() method in an infinite loop.
- If done is true, the iterable was exhausted without finding a match, so return undefined.
- If f(value) is true, return value.
Unlike Array.prototype.filter, which doesn't support lazy evaluation and returns an array of every element that satisfies the predicate, a lazy-evaluation-supporting filter is executed only as much as needed. That is, if we call next() just once on an iterator created by a lazy filter, it has the same logic and efficiency as find.
So, find can be created using filter:
const find = <A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined => {
const [head] = filter(f, iterable);
return head;
}
To increase code modularity, let's separate the part that finds the head into a separate function:
const head = <A>([a]: Iterable<A>): A | undefined => a;
const find = <A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined => {
return head(filter(f, iterable));
}
Finally, let's change it to chaining using the FxIterable class we created in chapter 2:
const find = <A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined =>
fx(iterable)
.filter(f)
.to(head);
All three approaches provide the same efficiency as the find function written in imperative code while making the code more concise.
We've seen how higher-order functions like find can be implemented with the same behavior and time complexity using combinations of list processing functions rather than an imperative approach. Code written in each paradigm can completely replace the others and can be mixed and matched. Using multi-paradigm languages, we can choose and combine the most appropriate paradigm for each situation.
Safe Composition in TypeScript
Let's look at ways to handle exception cases for safe composition in TypeScript:
// 1. Optional chaining operator (?.)
const desert = find(({ price }) => price < 2000, deserts);
const price = desert?.price;
// 2. Non-null assertion operator (!)
const desert2 = find(({ price }) => price < Infinity, deserts)!;
const name = desert2.name;
Safely Accessing Properties with Optional Chaining
When an object might not exist, you can use the optional chaining operator or nullish coalescing operator to safely access values and provide defaults.
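For example, combining optional chaining with the nullish coalescing operator lets us fall back to a default when find returns undefined (the desserts data below is illustrative, not from the book):

type Dessert = { name: string; price: number };

const desserts: Dessert[] = [
  { name: 'Cake', price: 6000 },
  { name: 'Cookie', price: 1500 },
];

const cheap = find((d: Dessert) => d.price < 2000, desserts);
console.log(cheap?.name);       // 'Cookie'
console.log(cheap?.price ?? 0); // 1500

const luxury = find((d: Dessert) => d.price > 10000, desserts);
console.log(luxury?.price ?? 0); // 0 (default when nothing is found)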
Expressing Intent with Non-null Assertion
If the absence of an object indicates an unintended situation for the developer, it's better not to hide the error but to propagate it. Not all errors should be hidden, as deeply hidden errors can be harder to debug. This approach expresses the developer's intent: "This logic is designed to always have a value."
Therefore, if an error occurs at this point, you should check if the API or DB is wrong, rather than removing the !.
The every Function
Now let's create a function that returns true if the given function f returns true for all elements, and false otherwise.
The idea is to convert all elements in the list to boolean values (using map) and then connect them all with the && operator (using reduce).
function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
return fx(iterable)
.map(f)
.reduce((a, b) => a && b, true); // [a: boolean], [b: boolean]
}
While reduce is often used as an accumulation function to calculate or merge two values, here we accumulate through logical operators. This method is useful for checking if all elements satisfy a condition.
You might wonder, "Why not process it all at once in reduce?" But the time complexity is O(n) either way.
Superficially, it looks like the list passes through map completely and then through reduce once more, but in reality each element passes through map and is then consumed by reduce, so the time complexity is the same.
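We can make this single-pass behavior visible by logging inside both callbacks (a small illustrative check, assuming the fx sketch above):

const result = fx([1, 2, 3])
  .map(n => {
    console.log('map:', n);
    return n % 2 === 1;
  })
  .reduce((acc, b) => {
    console.log('reduce:', b);
    return acc && b;
  }, true);

// Logs alternate: map: 1, reduce: true, map: 2, reduce: false, map: 3, reduce: true
console.log(result); // false (2 is even)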
Warning
If you use built-in array methods instead of iterator-based functions, the list really would be iterated twice, as one might expect.
The some Function
With a similar idea, we can implement the some function. Just use the logical operator ||:
function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
return fx(iterable)
.map(f)
.reduce((a, b) => a || b, false);
}
Inserting Break Logic Based on Lazy Evaluation
In fact, neither the every function nor the some function needs to iterate through all elements to produce a result. every can terminate iteration as soon as it encounters a false, and some can terminate as soon as it encounters a true.
function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
return fx(iterable)
.map(f)
.filter(a => a)
.take(1)
.reduce((a, b) => a || b, false);
}
By adding the .filter(a => a).take(1) part to the some function, we ensure it stops iterating as soon as it finds a true.
Abstracting Common Logic Functionally
Functional programming treats lists, code, and functions as values, making it convenient to separate and abstract common logic. The every and some functions have similar code structures, so let's abstract them:
function accumulateWith<A>(
accumulator: (a: boolean, b: boolean) => boolean,
acc: boolean,
taking: (a: boolean) => boolean,
f: (a: A) => boolean,
iterable: Iterable<A>
): boolean {
return fx(iterable)
.map(f)
.filter(taking)
.take(1)
.reduce(accumulator, acc);
}
function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
return accumulateWith((a, b) => a && b, true, a => !a, f, iterable);
}
function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
return accumulateWith((a, b) => a || b, false, a => a, f, iterable);
}
Since it's already structured to pass functions, we can complete the abstraction with simple modifications. This demonstrates how functional programming is good for refactoring and maintainability.
Adding with concat
The array method concat is used to combine multiple arrays. arr.concat(arr2) immediately returns a new array combining arr and arr2 without modifying the original arrays. However, it evaluates all array elements immediately, which can increase memory usage when combining very large arrays.
Implementing concat using generators allows for lazy evaluation, processing elements only as needed and potentially improving memory efficiency and performance.
Implementing concat with Generators
function* concat<T>(...iterables: Iterable<T>[]): IterableIterator<T> {
for (const iterable of iterables) yield* iterable;
}
const arr = [1, 2, 3, 4];
const iter = concat(arr, [5]);
console.log([...iter]);
// [1, 2, 3, 4, 5]
The notable point here is that instead of combining the entire arrays at once, it generates elements one by one as it iterates.
Differences Between Array concat and Generator concat
const arr = [1, 2, 3, 4, 5];
// Example using array concat
const arr2 = arr.concat([6, 7, 8, 9, 10]);
console.log(arr2); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
let acc = 0;
for (const a of take(7, arr2)) {
acc += a;
}
console.log(acc); // 28
// Example using generator concat
const iter = concat(arr, [6, 7, 8, 9, 10]);
console.log(iter); // concat {<suspended>} (nothing happens)
let acc2 = 0;
for (const a of take(7, iter)) {
acc2 += a;
}
console.log(acc2); // 28
When using the array's concat method, memory usage increases when large arrays are copied. In contrast, generator concat works efficiently by generating values only when needed, without copying them.
Even though all we want from the combined array is the acc value, we have to create a new array arr2. However, generator concat can perform the operation without copying the array.
Note
If we consider processing speed rather than memory usage, using the array method concat is much faster.
Thinking About Using concat Instead of unshift
unshift is a method that adds new elements to the beginning of an array, modifying the original array. Using generator concat, you can accomplish the same task without modifying the original array while using memory efficiently.
The unshift method can cause overhead as the array size increases, because adding an element to the front requires shifting every existing element back by one index. If an array has 100 elements, each time you add an element to the front, 100 elements must be moved.
const arr = ['2', '3', '4', '5'];
const iter = concat(['1'], arr);
console.log(arr); // ['2', '3', '4', '5']
let result2 = '';
for (const str of iter) {
result2 += str;
}
console.log(result2); // '12345'
Conclusion
The equation Code:Object:Function = Generator:Iterator:LISP = IP:OOP:FP means that the boundaries between paradigms are disappearing in modern programming.
Generators implement functional lazy evaluation using imperative syntax, Iterators provide imperative interfaces for object-oriented patterns, and all of this integrates on top of LISP's philosophy that "code is data."
With this mindset:
- Problem-solving is not limited to a specific paradigm
- Code reusability and composability increase
- You can choose the approach optimized for each situation
- You become a developer less dependent on languages and tools
Ultimately, what's important is not adhering to a specific paradigm, but the ability to understand the essence of the problem and combine the most appropriate tools. Modern multi-paradigm languages are designed to maximize this flexibility.