Monday, May 19, 2014
Candy Crush is the kind of game R.J. Reynolds would make if they were peddling entertainment rather than cancer. The game itself is visually appealing, fun to play, and very addictive. It’s that last element that really adds the sleaze to the coercive monetization King Digital Entertainment uses to generate income.
I’m one of the 70% who’ve never spent money on the game and never intend to, but I was tempted to pry open my wallet the other day when I saw this while playing:
Live forever! You mean I can pay something reasonable like $10 or $15 and just play the game continuously? I don’t have to wait or spend money to continue playing each and every time I run out of lives? That’s something I’d be willing to buy!
Oh, wait. By forever you mean the next two hours, not for all time. So if I lose lives at a rate of one each minute, how many “unlimited” lives can I get?
Let’s see… 117. 118. 119. Infinity. That adds up.
No thanks, I’ll spend my money elsewhere.
Just say no to coercive monetization. Play but don’t pay. Develop patience or spend your money on games that charge you upfront and allow you to play continuously.
Sunday, May 11, 2014
Remembering Mom 2014
Driving Mom to the airport:
MOM: I read that you shouldn’t put your address on your luggage tags. Thieves will know that you’re not at home and can rob your house while you’re gone.
ME: So whose address did you use?
MOM: Yours.
Happy Mother’s Day, Mom.
Thursday, May 8, 2014
River Benchmark
I came across another river crossing problem that’s similar to the farmer’s dilemma example problem for CLIPS. It’s a little bit more complex by virtue of the number of things that have to be moved, but it’s essentially the same type of problem.
One of the issues with existing benchmarks such as waltz and manners is validation. The manners benchmark only runs in CLIPS with the depth conflict resolution strategy and the waltz benchmark executes different numbers of rules depending upon the conflict resolution strategy chosen.
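To see this for yourself, the strategy can be switched from the CLIPS command prompt before a run (the benchmark file names here are assumed):

   CLIPS> (load "manners.clp")   ; or waltz.clp
   CLIPS> (set-strategy breadth) ; depth, breadth, simplicity, complexity, lex, mea, or random
   CLIPS> (reset)
   CLIPS> (run)

Run waltz this way under two different strategies and the rule firing counts won’t match, which makes it hard to verify that a translation to another rule language behaves the same way.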
Clearly this is an issue. I’m a proponent of having lots of benchmarks rather than just one or two, but for that to be practical they need to be dead simple to translate and verify. The existing benchmarks are not.
What this means is that you have to design the benchmarks with this in mind. I thought it would be an interesting exercise to demonstrate how this can be done, so I wrote a CLIPS program to solve a variation of the river crossing problem and, once I had it working, set the conflict resolution strategy to random to see whether the same number of rules was executed on each run. It wasn’t.
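The check itself is simple; here’s a sketch, assuming the program is saved as river.clp (the file name is just an assumption):

   CLIPS> (load "river.clp")
   CLIPS> (watch statistics)   ; prints the number of rules fired after each run
   CLIPS> (set-strategy random)
   CLIPS> (reset)
   CLIPS> (run)

If the program is deterministic, repeating the reset/run pair prints the same rule firing count every time.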
It took several iterations before I had a version that produced the same number of rule executions regardless of the order in which rules of the same priority/salience were placed on the agenda. The primary mechanism used to get an exact number of rules executed was to assign weights to each of the possible moves that could be made so that the search was always made in the same order. In total, there were 19 rules and 5 salience values for the different groups of rules. If I’d used modules (which would have made translation to other languages more difficult), there wouldn’t have been any need for salience values at all.
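To give the flavor of the weighting approach (a hypothetical sketch, not the rules from the actual program), a weight stored with each candidate move lets a single rule always select the highest-weighted move remaining, so the firing order is the same under any strategy:

   ;; Hypothetical sketch: weights force a deterministic order among moves.
   (deffacts candidate-moves
      (move goat 3)
      (move wolf 2)
      (move cabbage 1))

   (defrule try-next-move
      ?m <- (move ?item ?weight)
      (not (move ? ?w&:(> ?w ?weight))) ; no higher-weighted move remains
      =>
      (retract ?m)
      (printout t "Trying move: " ?item crlf))

No matter what order the activations reach the agenda in, the not conditional element only allows the highest-weighted remaining move to fire, so the output is always goat, then wolf, then cabbage.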
Like the manners and waltz programs, the river program runs considerably faster in version 6.3 of CLIPS than in version 6.24. On a 2.66 GHz Intel Core 2 Duo it completes in 0.7 seconds in the newer version, as opposed to 29 seconds in the older version (roughly a 40x speedup).
The river program is available here.