Talk:Bélády's anomaly


Unnamed section

Should this link to László Bélády?

The table in the article gets a little confusing. It appears from the table that the pages get "moved" to a different frame at each consecutive time step, especially since you have labelled "Frame" at the side. I believe you are illustrating the FIFO queue data structure, such that the head of the queue is the 'oldest' page and the tail of the queue the 'youngest' page.
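For what it's worth, here is a minimal Python sketch of the FIFO behaviour being described, assuming the usual model in which a page stays in its frame and only the queue order tracks age (the function name fifo_faults is mine, not from the article):

 from collections import deque

 def fifo_faults(refs, nframes):
     """Count page faults under FIFO replacement with nframes frames."""
     queue = deque()  # left end = oldest page (next to be evicted), right end = youngest
     faults = 0
     for page in refs:
         if page not in queue:
             faults += 1
             if len(queue) == nframes:
                 queue.popleft()   # evict the oldest page; no page "moves" between frames
             queue.append(page)    # the new page joins the tail of the queue
     return faults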

By Phail_Saph,

The first chart is wrong; it will still lead to 10 interrupts. If you want a correct example, the previous two charts in the page history are correct.


I don't understand the table at all. cagliost 15:47, 19 May 2007 (UTC)


I don't understand the table either. It needs to be reconstructed using a different visualization method for the memory frames. —Preceding unsigned comment added by Teohaik (talk • contribs) 19:28, 12 January 2008 (UTC)

previously believed

 Previously, it was believed that an increase in the number of page frames
 would always provide the same number or fewer page faults.

I don't get the "fact" (?). The more data you have to hit, the bigger the chance you miss, i.e. the smaller the chance you hit the stuff. It's probability. It's an unproved claim and a hard one to believe. Xchmelmilos (talk) 18:51, 24 April 2008 (UTC) Since it makes no sense, I have deleted it, and instead of "he stated" I placed "proved". I don't want to be an azz here, so please fix me if it is wrong, though it is bloody hard to believe someone believed that. Xchmelmilos (talk) 19:05, 24 April 2008 (UTC)

I don't really understand what you're trying to say. This certainly has nothing to do with probability; it's about carefully constructing relatively improbable request sequences to make bigger caches fail more often. And the "previously believed" bit does make sense; Bélády's anomaly isn't exactly obvious. .froth. (talk) 00:58, 4 June 2010 (UTC)
I agree, the anomaly is far from obvious. The point is that for the same reference string, you can get more page faults if you use more page frames. This is quite weird, as you would expect to have to do fewer page replacements, resulting in fewer faults. If I understand your comment correctly, you (Xchmelmilos) seem to be referring to getting more misses when you have more data to fetch. That is obvious, but has nothing to do with Bélády's anomaly, as far as I know. Correct me if I'm wrong. Chaos.squirrel (talk) 06:25, 23 June 2010 (UTC)
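To make the counterexample concrete: Bélády's classic reference string from the literature produces 9 faults with 3 frames but 10 faults with 4 frames under FIFO. Using the fifo_faults sketch from earlier in this page:

 # Bélády's classic reference string: more frames, yet more faults under FIFO.
 refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
 print(fifo_faults(refs, 3))  # 9 page faults
 print(fifo_faults(refs, 4))  # 10 page faults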