
beders

In Clojure these are called transducers: composable reducers. https://clojure.org/reference/transducers
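A minimal sketch of the idea in plain JavaScript (hand-rolled for illustration; `mapT`, `filterT`, and `transduce` are made-up names, not Clojure's or any library's API):

```javascript
// A transducer is a function from one reducer to another.
// Composing transducers fuses the map/filter logic into a single
// reducer, so the input is walked only once.
const mapT = (f) => (next) => (acc, x) => next(acc, f(x));
const filterT = (p) => (next) => (acc, x) => (p(x) ? next(acc, x) : acc);

// Run a composed transducer over an array in one reduce pass.
const transduce = (xform, reducer, init, xs) =>
  xs.reduce(xform(reducer), init);

const append = (acc, x) => (acc.push(x), acc);

// Keep even numbers, then double them -- one pass over the array.
const xform = (next) => filterT((x) => x % 2 === 0)(mapT((x) => x * 2)(next));

const result = transduce(xform, append, [], [1, 2, 3, 4, 5]);
// result: [4, 8]
```

The point is that `xform` is just function composition, so you can stack as many maps and filters as you like without adding extra traversals.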


78yoni78

In a lot of languages I’ve seen different names for this: lazy lists, streams, generators. Generally they are things that let you work with lists of unbounded size lazily, iterating only when you actually read them. I’m not sure how you do this in Scheme, though :)
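For illustration, this is what the generator flavor looks like in JavaScript (`lazyMap` and `lazyFilter` are made-up names for the sketch): elements are pulled on demand, so the filter and the map happen in a single pass, and the source can even be infinite.

```javascript
// Lazy map/filter over any iterable: nothing runs until you iterate.
function* lazyMap(f, iterable) {
  for (const x of iterable) yield f(x);
}

function* lazyFilter(p, iterable) {
  for (const x of iterable) if (p(x)) yield x;
}

// An infinite source is fine, because consumption is demand-driven.
function* naturals() {
  for (let n = 0; ; n++) yield n;
}

// Each element flows through the filter and the map once;
// no intermediate list is ever built.
const evensDoubled = lazyMap(
  (x) => x * 2,
  lazyFilter((x) => x % 2 === 0, naturals())
);

const firstThree = [];
for (const x of evensDoubled) {
  firstThree.push(x);
  if (firstThree.length === 3) break;
}
// firstThree: [0, 4, 8]
```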


MadocComadrin

It's streams in Scheme.


Pierce_B_Architect

Would you say that SRFI-41 provides that?


[deleted]

That's a classic question, and the solutions differ, ranging from compiler optimizations to transducers. For example, in the popular JS FP library Ramda, most list functions can automatically be used as transducers.

A good article I saved on them: [https://www.jeremydaly.com/transducers-supercharge-functional-javascript/](https://www.jeremydaly.com/transducers-supercharge-functional-javascript/)

Ramda docs: [https://ramdajs.com/docs/#transduce](https://ramdajs.com/docs/#transduce)

Any list iterator can be used as a transducer: [https://ramdajs.com/docs/#map](https://ramdajs.com/docs/#map)

That said, most of the time you don't really need them. Don't try to optimize before you actually have a bottleneck.


Pierce_B_Architect

You mean that most of the time the list is small enough that doing 2 loops has no real impact? I ask because I really like the clarity of first filtering and then mapping instead of writing out some for-loop.


[deleted]

Exactly. Unless you’re working with seriously big data, 2 loops shouldn’t make a big impact.


MadocComadrin

Someone else pointed out laziness as a possibility. Under strict evaluation, you can combine the map and the filter into one pass using a right fold. The function you feed into the fold takes an element and an accumulated list; if the element satisfies your predicate, it applies the function you were supplying to map and conses the result onto the list argument, otherwise it returns the list unchanged. The initial accumulator is the empty list:

```scheme
(map f (filter p L))
≡
(fold-right (lambda (x R)
              (if (p x)
                  (cons (f x) R)
                  R))
            '() L)
```

If you want to map before filtering, do the same thing, but apply the function you'd give to map before testing the predicate.
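The same fused fold works in JavaScript with `Array.prototype.reduceRight` (a sketch; `mapFilterFold` is an illustrative name):

```javascript
// Fuse map(f) of filter(p) into a single right fold:
// test the predicate first, and only cons the mapped value
// onto the accumulator when it passes.
const mapFilterFold = (f, p, xs) =>
  xs.reduceRight((rest, x) => (p(x) ? [f(x), ...rest] : rest), []);

// Keep odd numbers and multiply them by 10 -- one traversal.
const out = mapFilterFold((x) => x * 10, (x) => x % 2 !== 0, [1, 2, 3, 4]);
// out: [10, 30]
```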


mesonofgib

As well as lazy sequences being a solution to this problem, most functional languages or libraries have an equivalent of a `partialMap` function (in F# it's called `choose`, for some reason) that lets you walk a sequence once, doing the map and the filter at the same time.
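A sketch of a `choose`-style function in JavaScript (the name and the "`undefined` means skip" convention are assumptions for the example, not any particular library's API):

```javascript
// Walk the array once; keep only elements for which the mapping
// function returns something other than undefined.
const choose = (f, xs) => {
  const out = [];
  for (const x of xs) {
    const y = f(x);
    if (y !== undefined) out.push(y);
  }
  return out;
};

// Parse numeric strings, dropping the failures: one pass.
const parsed = choose((s) => {
  const n = Number(s);
  return Number.isNaN(n) ? undefined : n;
}, ["1", "two", "3"]);
// parsed: [1, 3]
```

This combines the filter and the map into one decision per element, which is also how F#'s `choose` (with `Option`) and Haskell's `mapMaybe` (with `Maybe`) behave.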


gabedamien

For the specific case of mapping + filtering, a combined `filterMap`-style function can make sense. In Haskell this is the function `mapMaybe :: (a -> Maybe b) -> [a] -> [b]`, which requires your mapping function to return a _Maybe_-wrapped value (i.e. either `Just transformed` or `Nothing`). Whichever source elements get turned into `Nothing` are filtered out; whichever are transformed into `Just transformed` get included (and have been mapped). https://hackage.haskell.org/package/base-4.19.0.0/docs/Data-Maybe.html#v:mapMaybe

```
mapMaybe :: (a -> Maybe b) -> [a] -> [b]
mapMaybe _ []     = []
mapMaybe f (x:xs) =
  let rs = mapMaybe f xs in
  case f x of
    Nothing -> rs
    Just r  -> r:rs
```


sohang-3112

In Haskell, the GHC compiler automatically optimizes this in most cases so the list is traversed only once; the technique is called stream fusion.


mckahz

I know this isn't answering your question, but there really isn't a problem with looping over stuff twice. If you're coding in a functional style, you will write code that iterates over the same list a bunch of times. Functional programming is good for architectural reasons (although parallelism can make it good for performance reasons too), but especially if you're learning it, I would put those concerns aside and just try to do stuff.

If you're using Scheme, it might be good to read the first ~300 pages of Structure and Interpretation of Computer Programs (aka SICP or The Wizard Book). It's a bit dense and very academic, but it's also super interesting and has a bunch of exercises that will get you acquainted with FP.

IMO it demonstrates the conceptual simplicity of FP very well, especially when juxtaposed with imperative programming: namely, that creating a binding (using let) is conceptually identical to applying a function when you don't use mutation, with the caveat that recursive functions don't fit into this model. The Y combinator solves that, and its derivation is fascinating, although I'm not sure that's covered in SICP.


Pierce_B_Architect

It actually helps answer the question, thanks :). A lot of answers in this thread suggest specific constructs/libraries, which tells me it's not a basic language feature I missed. And yes, maybe I should look more into SICP if it helps me understand the FP way of designing software.


mckahz

I'm not sure it helps with understanding how to design software; for that I'd recommend Elm, or more specifically "The Life of a File" by Evan Czaplicki. It's more that SICP helps you understand the language: what it's capable of, how to do some basic things, and the way we talk about languages.