scala - How to maintain an immutable list when you update objects that are linked to each other inside this list


I'm trying to code the fast non-dominated sorting algorithm (NDS) of Deb, used in NSGA-II, in an immutable way using Scala.


But the problem seems more difficult than I thought, so I simplify it here to make an MWE.

Imagine a population of Seq[A], and for each A element a DecoratedA whose list contains pointers to other elements of the population Seq[A].

A function evalA(a: DecoratedA) takes the list of linked A it contains and decrements the value of each of them.

Next I take a subset list decoratedAPopulation of the population of A, and call evalA on each element. Here I have a problem, because between each iteration over an element of the subset list decoratedAPopulation, I need to update the population of A with the new DecoratedA and the new updated linked A they contain...

More problematic, each element of the population needs an update of its 'linkedA' list to replace a linked element if it changed...
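To make the synchronization concrete, here is a minimal sketch (illustrative only, the name rebuild is not part of my real code) of what a single update step would have to do: every old A that changed has to be replaced both as the decorated element and inside every other DecoratedA's linked list. It assumes As are compared by value.

case class A(value: Int) { def decrement() = A(value - 1) }
case class DecoratedA(oneADecorated: A, listOfLinkedA: Seq[A])

// Sketch only: given a mapping from old A to new (decremented) A, rebuild every
// DecoratedA so that both the decorated element and the linked elements point
// to the new values.
def rebuild(population: Seq[DecoratedA], replacement: Map[A, A]): Seq[DecoratedA] =
  population.map { d =>
    DecoratedA(
      replacement.getOrElse(d.oneADecorated, d.oneADecorated),
      d.listOfLinkedA.map(a => replacement.getOrElse(a, a)))
  }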


Hum, as you can see, it seems complicated to maintain the linked lists synchronized this way. I propose a solution at the bottom, which needs a recursion to return, after each evalA, a new population with the element replaced.


How can I do this correctly in an immutable way?

It's easy to code in a mutable way, but I can't find a way to do it in an immutable way. Do you have a path or an idea?

object Test extends App {

  case class A(value: Int) { def decrement() = new A(value - 1) }
  case class DecoratedA(oneADecorated: A, listOfLinkedA: Seq[A])

  // Start the algorithm loop on elements with value = 0
  val population = Seq(new A(0), new A(0), new A(8), new A(1))

  val decoratedAPopulation = Seq(
    new DecoratedA(population(1), Seq(population(2), population(3))),
    new DecoratedA(population(2), Seq(population(1), population(3))))

  def evalA(a: DecoratedA) = {
    val newListOfLinked = a.listOfLinkedA.map { e => e.decrement() }
    new DecoratedA(a.oneADecorated, newListOfLinked)
  }

  def run() = {
    //decoratedAPopulation.map{
    // ?
    //}
  }
}

Update 1:

About the input / output of the initial algorithm.

The first part of Deb's algorithm (step 1 to step 3) analyses the list of individuals and computes for each one: (a) its domination count, the number of individuals that dominate it (the value attribute of A), and (b) the list of individuals it dominates (listOfLinkedA).

So it returns a population of DecoratedA, totally initialized, and the entry of step 4 (my problem) takes the first non-dominated front, i.e. the subset of DecoratedA elements with an A value = 0.
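In terms of the MWE above, extracting that first front would be something like this (just a sketch, assuming the DecoratedA representation of the code above):

// First non-dominated front: the DecoratedA whose decorated A has value == 0
val firstFront = decoratedAPopulation.filter(_.oneADecorated.value == 0)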

My problem starts here, with this list of DecoratedA with A value = 0; I search the next front by computing the listOfLinkedA of each of these A.

At each iteration between step 4 and step 6, I need to compute a new B, the subset list of DecoratedA with A value = 0. For each A I decrement the domination count attribute of every element of its listOfLinkedA, then filter the elements equal to 0. At the end of step 6, B is saved in a list List[Seq[DecoratedA]], I restart step 4 with B, compute a new C, etc.

Something like this in code: I call explore() on each element of B, and Q is equal at the end to the new subset of DecoratedA with value (fitness here) = 0:

case class PopulationElement(popElement: Seq[Double]) {
  implicit def popToDouble(): Seq[Double] = {
    popElement
  }
}

class SolutionElement(values: PopulationElement, fitness: Double, dominates: Seq[SolutionElement]) {

  def decrement() =
    if (fitness == 0) this else new SolutionElement(values, fitness - 1, dominates)

  def explore(q: Seq[SolutionElement]): Seq[SolutionElement] = {
    // return the dominated elements with fitness - 1
    val newSolutionSet = dominates.map { _.decrement() }
    val filteredSolution: Seq[SolutionElement] = newSolutionSet.filter { s => s.fitness == 0.0 }.diff(q)
    filteredSolution
  }
}

At the end of the algorithm, I have the final list of Seq of DecoratedA, a List[Seq[DecoratedA]] which contains the fronts computed.
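For illustration, the overall shape of the step 4 to step 6 loop I have in mind is roughly the following (a sketch with illustrative names; it deliberately ignores the re-linking of listOfLinkedA, which is exactly the problem described above, and it assumes As are compared by their value):

// Sketch: peel off fronts until no DecoratedA with value 0 remains.
def extractFronts(population: Seq[DecoratedA],
                  acc: List[Seq[DecoratedA]] = Nil): List[Seq[DecoratedA]] = {
  val (front, rest) = population.partition(_.oneADecorated.value == 0)
  if (front.isEmpty) acc.reverse
  else {
    // how many members of the current front dominate each A
    val counts: Map[A, Int] =
      front.flatMap(_.listOfLinkedA).groupBy(identity).map { case (k, v) => k -> v.size }
    // decrement the domination count of the remaining elements
    val updated = rest.map { d =>
      d.copy(oneADecorated = A(d.oneADecorated.value - counts.getOrElse(d.oneADecorated, 0)))
    }
    extractFronts(updated, front :: acc)
  }
}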

Update 2

A sample of values extracted from the example. I take the Pareto front (red) and {f, h, l}, the next front with dominated count = 1.

(figure: the sample points, with the Pareto front in red and the next front {f, h, l})

case class P(x: Double, y: Double)

val a = A(P(3.5, 1.0), 0)
val b = A(P(3.0, 1.5), 0)
val c = A(P(2.0, 2.0), 0)
val d = A(P(1.0, 3.0), 0)
val e = A(P(0.5, 4.0), 0)
val f = A(P(0.5, 4.5), 1)
val h = A(P(1.5, 3.5), 1)
val l = A(P(4.5, 1.0), 1)

case class A(xy: P, value: Int) { def decrement() = new A(xy, value - 1) }
case class ARoot(node: A, children: Seq[A])

val population = Seq(
  ARoot(a, Seq(f, h, l)),
  ARoot(b, Seq(f, h, l)),
  ARoot(c, Seq(f, h, l)),
  ARoot(d, Seq(f, h, l)),
  ARoot(e, Seq(f, h, l)),
  ARoot(f, Nil),
  ARoot(h, Nil),
  ARoot(l, Nil))

The algorithm returns List(List(a, b, c, d, e), List(f, h, l)).

Update 3

After 2 hours, and some pattern matching problems (ahem...), I'm coming back with a complete example which automatically computes the dominated counter and the children of each ARoot.

But I have the same problem: the children list computation is not totally correct, because each element is possibly a shared member of another ARoot's children list, so I need to think about the answer to modify that :/ At this time I compute the children list as a Seq[P], and I need a list of Seq[A].

case class P(x: Double, y: Double) {
  def toSeq(): Seq[Double] = Seq(x, y)
}

case class A(xy: P, dominatedCounter: Int) { def decrement() = new A(xy, dominatedCounter - 1) }
case class ARoot(node: A, children: Seq[A])
case class ARootRaw(node: A, children: Seq[P])

object Test_StackOverflow extends App {

  val a = new P(3.5, 1.0)
  val b = new P(3.0, 1.5)
  val c = new P(2.0, 2.0)
  val d = new P(1.0, 3.0)
  val e = new P(0.5, 4.0)
  val f = new P(0.5, 4.5)
  val g = new P(1.5, 4.5)
  val h = new P(1.5, 3.5)
  val i = new P(2.0, 3.5)
  val j = new P(2.5, 3.0)
  val k = new P(3.5, 2.0)
  val l = new P(4.5, 1.0)
  val m = new P(4.5, 2.5)
  val n = new P(4.0, 4.0)
  val o = new P(3.0, 4.0)
  val p = new P(5.0, 4.5)

  def isStriclyDominated(p1: P, p2: P): Boolean = {
    (p1.toSeq zip p2.toSeq).exists { case (g1, g2) => g1 < g2 }
  }

  def sortedByRank(population: Seq[P]) = {

    def paretoRanking(values: Set[P]) = {
      // comment from @dk14: I suppose the order of values doesn't matter here, otherwise use a SortedSet
      values.map { v1 =>
        val t = (values - v1).filter(isStriclyDominated(v1, _)).toSeq
        val a = new A(v1, values.size - t.size - 1)
        val root = new ARootRaw(a, t)
        println("root value ", root)
        root
      }
    }

    val listOfARootRaw = paretoRanking(population.toSet)
    // from @dk14: here the conversion from Seq[P] to Seq[A]
    val dominations: Map[P, Int] =
      listOfARootRaw.map(a => a.node.xy -> a.node.dominatedCounter).toMap // from @dk14: it's a Map with the dominatedCounter for each point
    val listOfARoot =
      listOfARootRaw.map(raw => ARoot(raw.node, raw.children.map(p => A(p, dominations.getOrElse(p, 0)))))

    listOfARoot.groupBy(_.node.dominatedCounter)
  }

  // Get the first front, a subset of ARoot, and start step 4
  println(sortedByRank(Seq(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p)).head)
}

Talking about the problem of distinguishing the fronts (after update 2):

val (left, right) = population.partition(_.node.value == 0)
List(left, right.map(r => r.copy(node = r.node.copy(value = r.node.value - 1))))

No need for mutation here. copy copies all fields except the ones you specify with new values. Talking about your code: the new copy is linked to the same list of children, but has a new value = value - 1.
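For example, with the A / ARoot case classes of update 2 (just to illustrate what copy does):

val l = A(P(4.5, 1.0), 1)
val root = ARoot(l, Nil)
// a new ARoot whose node has value - 1; root itself is left untouched
val updated = root.copy(node = root.node.copy(value = root.node.value - 1))
// updated == ARoot(A(P(4.5, 1.0), 0), Nil)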

P.S. I have a feeling you may want something like this:

case class A(id: String, level: Int)
val a = A("a", 1)
val b = A("b", 2)
val c = A("c", 2)
val d = A("d", 3)

clusterize(List(a, b, c, d)) === List(List(a), List(b, c), List(d))

It's simple to implement:

def clusterize(list: List[A]) =
  list.groupBy(_.level).toList.sortBy(_._1).map(_._2)

Test:

scala> clusterize(List(A("a", 1), A("b", 2), A("c", 2), A("d", 3)))
res2: List[List[A]] = List(List(A(a,1)), List(A(b,2), A(c,2)), List(A(d,3)))

P.S.2. Please consider better naming conventions, see here.


talking "mutating" elements in complex structure:

The idea of "immutably mutating" a value shared between parts of the structure is to separate the "mutation" from the structure itself. Or, so to say, divide and conquer:

  1. calculate the changes in advance
  2. apply them

The code:

case class A(v: Int)
case class AA(a: A, seq: Seq[A]) // DecoratedA

def update(input: Seq[AA]) = {
  // shows how to decrement each value wherever it is:
  val stats = input.map(_.a).groupBy(identity).mapValues(_.size) // domination count for each A
  def upd(a: A) = A(a.v - stats.getOrElse(a, 0))                 // apply the decrement
  input.map(aa => aa.copy(a = upd(aa.a), seq = aa.seq.map(upd))) // traverse and "update" the original structure
}

So, I've introduced a new Map[A, Int] structure, which shows how to modify the original one. This approach is based on a highly simplified version of the applicative functor concept. In the general case, it should be Map[A, A => A] or Map[K, A => B] or Map[K, Zipper[A] => B] to be an applicative functor (input <*> map). *Zipper (see 1, 2) gives information about the current element's context.
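A minimal sketch of that generalization (the name updateWith is mine, purely illustrative): instead of a Map[A, Int] of decrements, pass a Map[A, A => A] of per-element update functions and apply them during the same traversal as update above.

def updateWith(input: Seq[AA], changes: Map[A, A => A]): Seq[AA] = {
  def upd(a: A): A = changes.getOrElse(a, (x: A) => x)(a)        // apply the registered change, if any
  input.map(aa => aa.copy(a = upd(aa.a), seq = aa.seq.map(upd))) // same traversal as update
}

// The decrement example expressed this way:
// val changes = stats.map { case (a, n) => a -> ((x: A) => A(x.v - n)) }
// updateWith(input, changes)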

Notes:

  • I assumed that As with the same value are the same; that's the default behaviour for case classes, otherwise you need to provide additional ids (or redefine hashCode/equals).

  • If you need more levels - AA(AA(AA(...))) - make stats and upd recursive; if the decrement's weight depends on the nesting level, add a nesting-level parameter to the recursive function.

  • If the decrement depends on the parent node (like "decrement the A(3)'s that belong to an A(3)"), add the parent node(s) as part of stats's key and analyse it during upd.

  • If there is a dependency between the stats calculations (how much to decrement) of, let's say, input(1) and input(0), you should use foldLeft with a partial stats accumulator: val stats = input.foldLeft(Map[A, Int]())((partialStats, elem) => partialStats ++ analize(partialStats, elem))

Btw, it takes O(n) here (linear memory and CPU usage).

Example:

scala> val population = Seq(A(3), A(6), A(8), A(3))
population: Seq[A] = List(A(3), A(6), A(8), A(3))

scala> val input = Seq(AA(population(1), Seq(population(2), population(3))), AA(population(2), Seq(population(1), population(3))))
input: Seq[AA] = List(AA(A(6),List(A(8), A(3))), AA(A(8),List(A(6), A(3))))

scala> update(input)
res34: Seq[AA] = List(AA(A(5),List(A(7), A(3))), AA(A(7),List(A(5), A(3))))
