Bay 12 Games Forum

Pages: 1 2 3 [4] 5 6 ... 9

Author Topic: Has anyone succesfully generated a very long history?  (Read 59712 times)

Rafal99

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #45 on: February 17, 2012, 02:07:06 pm »

Well, the strange thing is that while worldgen can eat >2 GB of memory while running, the final save of that world is only ~150 MB.

No, it is not strange at all.
1. Saves are compressed with a general-purpose compression library.
2. Saves do not need any of the pointers and lookup structures, because their contents are only ever read or written sequentially; the in-memory data does need them.
3. Saves can store the minimal amount of data that is just enough to restore the full game state on load, while the running game needs the complete state, plus all the supporting data structures, to actually operate on it.
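Those three points are easy to demonstrate with a toy sketch (Python here, with made-up records - nothing from DF's actual save format): structured data that lives in pointer-heavy containers in memory shrinks dramatically once it is serialized sequentially and compressed.

```python
import json
import sys
import zlib

# Toy "historical figures": in memory, each dict carries hash-table and
# pointer overhead; the save needs only the field values, written out
# one after another.
figures = [{"id": i, "name": f"Urist {i}", "kills": i % 7, "alive": i % 3 == 0}
           for i in range(10_000)]

in_memory = sum(sys.getsizeof(f) for f in figures)  # container overhead only,
                                                    # so a rough lower bound
serialized = json.dumps(figures).encode()           # sequential form
compressed = zlib.compress(serialized, level=6)     # what actually hits disk

# Repetitive structured data compresses heavily, so a small save is
# entirely compatible with a much larger in-memory working set.
assert len(compressed) < len(serialized) < in_memory
```

The exact ratio depends on the data, but an order-of-magnitude gap between working memory and a compressed save is entirely ordinary.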
« Last Edit: February 17, 2012, 02:11:22 pm by Rafal99 »
Logged
The spinning Tantrum Spiral strikes The Fortress in the meeting hall!
It explodes in gore!
The Fortress has been struck down.

Caldfir

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #46 on: February 17, 2012, 03:27:23 pm »

I'm talking about uncompressed saves, and yes, I would expect worldgen to require more memory space than the final save, but this is more than an order-of-magnitude difference.  If this leaves enough information to reconstruct all history, it stands to reason it can be pruned somewhat. 

A factor of 2-4 size would be unsurprising.  10 or more and I start to think something in the code is being a pig.  This is worse when you realize that the savegame size has never been the focus of serious optimization. 

I'm not saying the sizes should be identical, just that the discrepancy is larger than it looks like it should be. 
Logged
where is up?

NW_Kohaku

  • Bay Watcher
  • [ETHIC:SCIENCE_FOR_FUN: REQUIRED]
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #47 on: February 17, 2012, 03:43:08 pm »

Also, as someone pointed out in another thread, wars absolutely kill framerate, and I think that's because wars still involve each individual combatant being individually generated and individually fighting duels that functionally follow the same rules as the rest of the game, minus movement.

Boiling this stuff down to something more akin to Total War, where you just have units with average veterancy levels and the like, and abstracting war wounds out to anyone who was in that unit and survived, would make this game run far faster.

Again, you really don't even need any data until the player actually looks for it.  You can have a blank peasant whose background gets filled in from a vague list of probabilities based upon demographics at the exact moment the player actually starts looking at the guy or asking about his background.
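A rough sketch of that "fill it in when the player looks" idea, with invented names and probabilities (this is not how DF does it, just an illustration of deferred, deterministic generation):

```python
import random

# Hypothetical lazy peasant: nothing but an id exists until the player
# actually asks about the figure's background.
PROFESSIONS = ["farmer", "fisher", "potash maker", "cheese maker"]

class LazyPeasant:
    def __init__(self, world_seed, figure_id):
        self.world_seed = world_seed
        self.figure_id = figure_id
        self._background = None  # not generated yet

    def get_background(self):
        if self._background is None:
            # Deterministic per-figure RNG: same world, same peasant,
            # same backstory, no matter when the player first asks.
            rng = random.Random(self.world_seed * 1_000_003 + self.figure_id)
            self._background = {
                "profession": rng.choice(PROFESSIONS),
                "age": rng.randint(12, 160),
            }
        return self._background

p = LazyPeasant(world_seed=42, figure_id=7)
assert p.get_background() == p.get_background()  # stable once generated
assert LazyPeasant(42, 7).get_background() == p.get_background()  # reproducible
```

The point being that you pay the memory and CPU cost per figure only on inspection, not during worldgen.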
Logged
Personally, I like [DF] because after climbing the damned learning cliff, I'm too elitist to consider not liking it.
"And no Frankenstein-esque body part stitching?"
"Not yet"

Improved Farming
Class Warfare

PTTG??

  • Bay Watcher
  • Kringrus! Babak crulurg tingra!
    • View Profile
    • http://www.nowherepublishing.com
Re: Has anyone succesfully generated a very long history?
« Reply #48 on: February 17, 2012, 06:36:00 pm »

I'm not entirely sure how wars work.

Actually, it'd be neat if the world gen showed battles happening somehow, since they take so much time anyway. Of course, not in an explicit way, but merely showing a little tally of losses for each side as it progresses.
Logged
A thousand million pool balls made from precious metals, covered in beef stock.

NW_Kohaku

  • Bay Watcher
  • [ETHIC:SCIENCE_FOR_FUN: REQUIRED]
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #49 on: February 17, 2012, 07:09:40 pm »

That is, at least, how wars used to work, and while it's entirely possible it's been changed since then, I think Toady's been saving that for Army Arc, which is coming "Soon" (for Valve Time definitions of "soon"). 

That is the reason you could get stories like Cryptbrain, where single soldiers rack up thousands of kills - everyone lines up single file, and nobody gets tired.
Logged
Personally, I like [DF] because after climbing the damned learning cliff, I'm too elitist to consider not liking it.
"And no Frankenstein-esque body part stitching?"
"Not yet"

Improved Farming
Class Warfare

knutor

  • Bay Watcher
  • ..to hear the lamentation of the elves!
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #50 on: February 18, 2012, 04:25:49 am »

I've got a couple of mature regions so far - 1248 years.  Although I'm having trouble with rejects, and the twelve megabeasts keep dying off young too.  Could be the seeds, dunno.  Gonna spin some more globes and see.  One thing I did notice is a lot more starvation and death.  My word, I've got half a million dead, and over that expanse of time just 2k are alive and active.  Many of those are born of unknown lineage - doesn't that mean they simply poofed into existence to feed a titan or vampire?

I see many more gods than before.  One thing I determined in the last long-history world is that if I disable vampire curse types (null them to 0), it disables all vampires, even those born of the virgin, Helga of Pandemonium.  Another thing I found is that kobolds are showing up after year 1000+; they don't live very long, and don't have much to say, but they are there.

I haven't determined whether the number of caverns (mountain and non-mountain) determines kobold civilizations.  Wish it was a little clearer.  It does appear they eat, as many died of starvation in this region.

Sincerely, Knutor
Logged
"I don't often drink Mead, but when I do... I prefer Dee Eef's." -The most interesting Dwarf in the World.  Stay thirsty, my friend.
Shark Dentistry, looking in the Raws.

Proteus

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #51 on: February 18, 2012, 06:45:07 am »

Everybody should have moved to 64bit computing 5 years ago, if you ask me.

Yes, well, there are a lot of things Toady "should have done", but refuses to do.  There is no multi-threading, no redesigned interface, no API so others can make a third-party interface, no severely needed optimization of pathfinding, optimization of fluid motion, etc. etc. etc.

And don't even start trying to talk about adding multiplayer.  There isn't a single-player game's forum I go to for the past couple years that doesn't have at least one nitwit a week talking about how much the game needs to be completely rewritten so he can play it multiplayer.
This one always bothered me. Not every game needs to be multiplayer. The only reason why game companies make nearly everything multiplayer now is so they can push out more copies of the game by forcing it to only play properly through their server.

Hell, yeah -
look at Electronic Arts, who with their newest expansion are even trying to put some kind of multiplayer capability into as single-player a game as The Sims 3.
It's not even real multiplayer, but a kind of rabbit hole you send your Sim into for a short while: the Sim is "transported" to your friend's world and, computer-controlled, performs gigs and the like. In other words, the only effect of this "multiplayer" is that you get another rabbit hole to send your Sim to, where it spends some time and then returns, while your friend briefly gets a new NPC Sim added to their world (with attributes and looks based on the Sim you sent) - of course with rewards that are only unlocked if you use this "multiplayer feature". All this just to get players to use the network features of The Sims 3 more.


As for genning a world in DF ...
seems like I was lucky with my first genned world ...
I tried to generate a huge 300-year-old world and it finished within 2 hours.
Then (well, 10 hours ago) I got adventurous and tried to make it gen a huge 700-year-old world while I slept...
When I awoke ~2 hrs ago the world was in year 338, with 1 year being calculated every 10 minutes, the program (of course) using a full core and 1.5 GB of RAM, the world having ~250k living historical figures and 230k dead ones, and 20,000 new events being added each year (we are above 5 million events now).
Unfortunately, between the moments a new year finishes calculating, the program is totally unresponsive and doesn't seem to register any keyboard input - neither at the moment a new year is displayed nor in the time between - so there seems to be no way to end the calculation prematurely (which would give me a world only half as old as I wanted, but at least a new world). So, unfortunately, it seems I have to close the window, with 10 hours of processing time being for naught :(
Logged

YetAnotherStupidDorf

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #52 on: February 18, 2012, 09:11:56 am »

With world generation like that, Toady will have severe problems adding new features and events to it. We do not want 24-hour generations on a pocket world with a 50-year limit (anything longer crashing the game), right?

BTW, it seems like he should handle window/keyboard events every in-game week, not every year... ::)
Logged
Dwarf Fortress - where the primary reason to prevent the death of your citizens is that it makes them more annoying than they were in life.

Feb

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #53 on: February 18, 2012, 10:00:21 am »

Unresponsiveness killed my hope. I managed to gen a world that reached the FOURTEENTH Age of Myth at year 400 on a medium region world, and I couldn't save it!  (It took an entire day and night to gen, too ><)
Logged

dirty foot

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #54 on: February 19, 2012, 12:45:23 am »

Everybody should have moved to 64bit computing 5 years ago, if you ask me.

Yes, well, there are a lot of things Toady "should have done", but refuses to do.  There is no multi-threading, no redesigned interface, no API so others can make a third-party interface, no severely needed optimization of pathfinding, optimization of fluid motion, etc. etc. etc.

And don't even start trying to talk about adding multiplayer.  There isn't a single-player game's forum I go to for the past couple years that doesn't have at least one nitwit a week talking about how much the game needs to be completely rewritten so he can play it multiplayer.
This one always bothered me. Not every game needs to be multiplayer. The only reason why game companies make nearly everything multiplayer now is so they can push out more copies of the game by forcing it to only play properly through their server.

Hell, yeah -
look at Electronic Arts, who with their newest expansion are even trying to put some kind of multiplayer capability into as single-player a game as The Sims 3.
It's not even real multiplayer, but a kind of rabbit hole you send your Sim into for a short while: the Sim is "transported" to your friend's world and, computer-controlled, performs gigs and the like. In other words, the only effect of this "multiplayer" is that you get another rabbit hole to send your Sim to, where it spends some time and then returns, while your friend briefly gets a new NPC Sim added to their world (with attributes and looks based on the Sim you sent) - of course with rewards that are only unlocked if you use this "multiplayer feature". All this just to get players to use the network features of The Sims 3 more.


As for genning a world in DF ...
seems like I was lucky with my first genned world ...
I tried to generate a huge 300-year-old world and it finished within 2 hours.
Then (well, 10 hours ago) I got adventurous and tried to make it gen a huge 700-year-old world while I slept...
When I awoke ~2 hrs ago the world was in year 338, with 1 year being calculated every 10 minutes, the program (of course) using a full core and 1.5 GB of RAM, the world having ~250k living historical figures and 230k dead ones, and 20,000 new events being added each year (we are above 5 million events now).
Unfortunately, between the moments a new year finishes calculating, the program is totally unresponsive and doesn't seem to register any keyboard input - neither at the moment a new year is displayed nor in the time between - so there seems to be no way to end the calculation prematurely (which would give me a world only half as old as I wanted, but at least a new world). So, unfortunately, it seems I have to close the window, with 10 hours of processing time being for naught :(
I remember when there was a HUGE breakthrough in brute-forcing by using the graphics card's GPU instead of the CPU. I wonder if the same technology could be used to optimize worldgen. I assume such a design would be far more work than just making DF work on 64-bit systems. While it seems that worldgen is hitting a cap for sure, I'm noticing that the cap is with my processor, and not with RAM like people were originally assuming. Maybe the answer really is multithreaded functionality, and not RAM caps.

I honestly don't see either solution being attempted though. Resistance to it is formidable, to say the least.
Logged

telamon

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #55 on: February 19, 2012, 01:52:56 am »

I'm on an old laptop (i3-350M with 2 GB of RAM) and I've managed to hit large regions of up to 250 years; this took a few hours (regions were designed in PerfectWorldDF). If I just let it run out to 1000 years, I can usually hit 280-300 years before crashing. I'm not interested in playing any world smaller than 257x257, so yup, that's the ceiling I've reached. This was while using the computer, but only lightly - some minor browsing (and I got some excruciating lag at a few points). Faster worldgens would obviously come from a better processor; longer worldgens, however, will be limited by your computer's RAM.

As for the business of 64-bit programming... you would indeed be able to address more RAM (there's a mod in the modding section that gives the game large-address awareness on systems with more than 3 GB of RAM), but that doesn't get around the obvious problem: most people don't have that much RAM. There's no benefit to being able to address RAM that doesn't exist in the first place; switching to 64-bit will not magically expand a 2 GB stick of RAM to 4 GB. Modern consumer-grade systems mostly stop at 2-4 GB, and few go above 16. Beyond that you're looking at server motherboards; most RAM sticks of that size require ECC support (a type of RAM error checking that consumer boards generally lack). Server components are far more expensive than consumer parts, so that's not a reasonable option. A modification of this type would benefit a minimal number of people, for a VERY significant time investment. (Sorry to those of you who really are overflowing with RAM... but honestly, nothing these days can consume that much RAM anyway, so was there really a point in buying that much? =P)

I find it hilarious how people refer to the idea of "rewriting the code for 64 bit" as if this is some small or reasonably sized task. DF is no diminutive game. You're talking about rewriting an entire game, and it's not exactly a young game either. Refactoring the entire code base will take a long time. If you're wondering why Netscape is no longer a relevant browser, it's because their team decided to rebuild their code from the ground up, and meanwhile the rest of the browser world kept growing. You're talking about throwing away an entire program and rewriting it from scratch. It's also been suggested several times that Toady open up the code for other people to help; as other threads have pointed out, drop it. It's not happening. What do you think the result would be if Toady rewrote the code, at ANY point in the development? You'd have to spend several months waiting for a release of the same stuff you already had before, except now in shiny sexy 64 bit. Oh wait, but it's not playable, because there's been a whole new crop of bugs in the process of writing new code from the ground up. Let's wait another few months while Toady ferrets them all out for us. Oh and now we have to wait for all our favorite utilities to catch up and find the new offsets. And what will we all be doing in the meantime? I guess we'll have to play the OLD version of DF that was still in 32 bit. Poor us. In the time lost, Toady could have been making useful, visible progress on another arc of his development plan. Yes, 32 bit progress. Poor us. Are you going to throw your dwarves off a cliff in outrage?

As for this GPU business: what you're thinking of is basically a form of hardware acceleration - GPU-compute functionality where computational workload is put onto the GPU. The problem with this idea is that graphics units have their own APIs (basically the system of functions and code designs you need to know in order to invoke and control those units). NVIDIA has a fairly mature code base in the form of CUDA, their GPU-compute library; it's the most mature solution out there right now, but there are competitors like OpenCL, which AMD/ATI are backing. There are a few issues with trying to extend DF in this direction, however. One, most consumers don't even have discrete graphics cards (the run-of-the-mill consumer won't even know what one is). Two, most consumer-level graphics cards don't have advanced GPU-compute abilities; NVIDIA's professional compute cards (the Tesla line) are designed exactly for that kind of acceleration, and they're exactly that - professional, with matching price tags - while your average GeForce or Radeon card is not well suited to the task. And three, the most important: you basically need to learn an entire new toolchain to write a GPU-compute application. You have to learn the API the card exposes so that your code can control it; you can't write it in the same form as the rest of your code. The underlying programming language might be the same, but a whole new infrastructure of functions and parts is needed. EDIT: oh, and just in case that wasn't bad enough, CUDA is a proprietary NVIDIA platform, so people with AMD cards would be unable to make use of any accelerations written in CUDA unless Toady learned ANOTHER API and rewrote the GPU accelerations AGAIN in ANOTHER form.
that's software standards for you - every time someone attempts to create a new "unified" standard over two or more existing ones, it fails and you end up with a third standard in the mix.
The advantage of course is that GPUs are inherently good at parallel processing; with hundreds of shader cores onboard, you have plenty of parallel muscle to use for calculating several things at once. That's why it's useful for stuff like brute force password cracking. However those shader cores suffer from relatively shallow pipelines and clumsy instruction sets; these things took a long time to clean up and that's where we get libraries like CUDA and OpenCL. Things that would be useful.... but that Toady would have to learn, basically from the ground up, to make use of any such functionality.

and for the people interested in multiplayer: http://kotaku.com/5886151/why-skyrim-and-arkham-city-dont-have-multiplayer

EDIT: oh and another thing regarding multithreading. Let's just overlook the monumental task of reorganizing the current code base into a multithreaded application; let's pretend that's not even a problem (when of course, in reality it would be a very big job. quite convenient for the end user to overlook the hard work of their software developers, isn't it?). As someone else brought up (apologies to the person who did observe this), you're not going to gain much. Multithreading is useful when an application has numerous tasks it can perform in parallel, ie they aren't interreliant enough that they need to be done serially (in order, one by one). To some extent, DF might be able to benefit from such parallelization. However, ultimately DF needs to calculate things in order. Everything is connected to everything else, so trying to find natural points along which to split the computation is going to be exceptionally difficult. Events in DF are so intricately intertwined that being able to calculate them separately would require exceptional amounts of concurrency control (which basically refers to the way programs manage resources so that they don't fall over one another when two concurrent processes access something at the same time), and ultimately the benefit won't be great. Is there some room for improvement in this regard? yeah, probably. But in the long run? not so sure, and moreover, what of the work that would have to be done to make this a possibility? That code doesn't write itself.
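That converge/separate cycle can be sketched in a few lines: per-tick work fans out across threads, but every tick ends in a serial join before the next can begin (the region count and update rule here are invented purely for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fork-join tick loop: independent per-region work fans out across
# threads, but every tick still ends in a serial "recombine" step
# before the next tick may start - the synchronization point described
# above.
def update_region(pop):
    return pop + 1  # stand-in for one region's simulation step

def run_ticks(regions, ticks, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(ticks):
            regions = list(pool.map(update_region, regions))  # fork
            total = sum(regions)  # join: serial global bookkeeping
    return regions, total

regions, total = run_ticks([0, 0, 0, 0], ticks=10)
assert regions == [10, 10, 10, 10] and total == 40
```

The fork/join overhead is paid every single tick, which is exactly why the gains shrink as the serial bookkeeping grows.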
« Last Edit: February 19, 2012, 02:23:20 am by telamon »
Logged
Playing DF on Windows 98 since.... ?
At 55 frames per minute.

dirty foot

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #56 on: February 19, 2012, 02:31:27 am »

That was very informative, Telamon - thank you for taking the time to share it. I honestly didn't expect CUDA to be a viable option; I was merely spitballing the thought because I really didn't know much about it, but I'd heard the processing capacity was obscene.

I do kind of stick to my guns on rewriting in 64-bit. I don't know about anyone else, but I'm willing to wait for more content if Toady were to work on such a thing. I realize we can all just lower our standards and make smaller worlds, but I feel like each major content patch is just going to equate to smaller world gens without more wiggle room. From my (albeit novice) math, switching to 64-bit is a solid investment, and likely a better idea earlier rather than later in the game's development.

How do you feel about multi-core functionality?
Logged

Urist McDepravity

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #57 on: February 19, 2012, 02:43:56 am »

I find it hilarious how people refer to the idea of "rewriting the code for 64 bit" as if this is some small or reasonably sized task. DF is no diminutive game. You're talking about rewriting an entire game, and it's not exactly a young game either. Refactoring the entire code base will take a long time
It's not as bad as "rewriting all the code". If you were sane with mallocs and other manual pointer/address-space handling, switching to 64 bits is very straightforward and easy - mostly just changing compiler settings and throwing some macros around.
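One reason the port can be that mechanical: pointer width is a property of the build, and code that queries it rather than hard-coding 4 bytes survives the switch untouched. A tiny illustration (Python here, which exposes its own build's pointer size; in C the equivalent would be using size_t/intptr_t instead of int for addresses):

```python
import struct
import sys

# The pointer width this interpreter was built with: 4 bytes on a
# 32-bit build, 8 on a 64-bit one. Format code "P" means "native
# pointer", so nothing here hard-codes a width.
pointer_bytes = struct.calcsize("P")
is_64bit = sys.maxsize > 2**32

# The two views of the build must agree, whichever build this is.
assert (pointer_bytes == 8) == is_64bit
print(f"{pointer_bytes * 8}-bit build")
```

Code written this way recompiles cleanly for either width; code that assumed 4-byte pointers is where the real porting work hides.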
Logged

telamon

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #58 on: February 19, 2012, 03:00:47 am »

me?

In all truthfulness, I'm not qualified to say. Since I consider software engineering my ultimate career, I try to stay informed =P, but really, the effectiveness of multithreading for a program like DF will depend a lot on the architecture of the code and how amenable it is to parallelization. Let me provide an in-depth example. I'm prone to writing walls of text, so you can skip this, probably.

Spoiler (click to show/hide)

What you can basically see happening here is that as the system grows more and more complicated - or even if we ignore the increases in theoretical complexity and simply scale up from 3 tiles to 4, or 5, or 6, or 2000 - the returns we get from multithreading diminish. The work we have to do to set up an efficient multithreaded system and find the independent calculations grows as the system becomes more complex, but the benefits from threading off the independent calculations do not grow as fast. With each increase in the system's size or depth of calculation, it takes more and more work to figure out which calculations are threadable, but the number that are threadable (and the more independent calculations there are, the more things we can do simultaneously, for greater time savings) is not increasing. The problem is that ultimately the system has to recombine somewhere - we can't do EVERYTHING in parallel; eventually we have to stop, put all the calculations together, and do a few things one after another before we can start splitting off again, and to start splitting off we have to do work to figure out what we can split off, and so on. The end result is that the more complicated the system gets, the less useful multithreading becomes.

There are ways around this, of course. The problem we faced was that as our system became more complex, it became complex in ways that were interdependent; ie the complications weren't making the system MORE parallel. I'm sure that if we spent a few months drilling through Toady's code, perhaps we could find optimizations that would streamline these mechanical issues. But ultimately the result is still the same. The entire game state has to come together in one place before we can move to the next tick. No matter how we try, we can't handle parts of the fortress separately between ticks and keep them on permanently separated threads; always we must reconverge, then reseparate, converge, separate. And the programming overhead of doing so would dig deep into the benefits of multithreading at all.

The solution, as Kohaku has been saying, does not lie in boosting the processing power at the program's disposal. For every Moore's law there is a software developer striving to overwhelm it (Moore's law states that the number of transistors on a processor doubles roughly every 18 months; it has held up historically, but of course code is growing more complicated far faster than that). The solution is really about finding more clever ways to use the processing power we already have, and that's something Toady has already taken into account. There was an example earlier in this thread of how army battles are handled as individual unit vs. individual unit. A whole unit is quite a complicated entity from a memory and code standpoint; ultimately we hope to streamline battles so that a unit is just a single number in the sea, so that even under the same computing constraints our programming improves and we can handle bigger battles. I'm pretty sure this is the kind of stuff Toady has lying around in the Army Arc.

tl;dr multithreading could have benefits, but i suspect the processing needed to facilitate them will counterbalance or outweigh any gains. true optimization lies not in improving the amount of power available to your program, but improving the way your program uses the power it has. I think those optimizations are things Toady has already planned.
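The tl;dr has a classical form: Amdahl's law. A two-line sketch makes the ceiling concrete (the 90% figure below is an assumption for illustration, not a measurement of DF):

```python
# Amdahl's law: if a fraction p of the work parallelizes and (1 - p)
# must stay serial, n cores give a speedup of 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of a tick parallelizable, 8 cores fall well short of
# 8x, and infinitely many cores cap out at 10x.
assert round(amdahl_speedup(0.9, 8), 2) == 4.71
assert round(amdahl_speedup(0.9, 10**9), 1) == 10.0
```

The serial "reconverge" fraction is the whole game: shrink it and cores help; leave it large and no amount of threading does.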

EDIT:
Quote
It's not as bad as "rewriting all the code". If you were sane with mallocs and other manual pointer/address-space handling, switching to 64 bits is very straightforward and easy - mostly just changing compiler settings and throwing some macros around.
Fair enough, but that still ignores the ripple effect of releasing a new compile. Bugs, offset shifts, and plain old "sh!t happens" are bound to occur somewhere, so even if a 64-bit release came out in a month, the player base would probably take 2-3 months or more to start using it. I'd rather the sh!t happened in the process of improving a play mechanic in the game. Besides, it still doesn't solve the other problem I mentioned: switching from one address size to another doesn't create memory. If there was memory waiting to be used, it might unlock it, but the physical limitation of the computer is still there. The benefits of moving to a new address size will be slim while the memory limitation of the user still exists.

Consider my world constraints right now. I top out at around 300 years on 257x257 worlds. As a world grows older, its complexity grows as well - probably geometrically, if not faster. In other words, if I doubled the age of a world, the memory space it occupies would probably more than quadruple. Let's suppose I had 4 GB of RAM and DF just couldn't use it because it's 32-bit at the moment. So now I've doubled my usable RAM - but if the 300-year world is taking up all of it (i.e. 2 GB), a 600-year world is probably going to take up well over 4 GB. Of course I'm just guessing at the memory growth rate of a world, but my point is that being able to address an extra gigabyte or so of RAM is really not going to add much to the history of a world. If a world of x years takes up 3 GB, one of 2x years would probably take 9 GB or more, etc. 1 GB starts looking pretty small.
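That guess, written out (the quadratic exponent is pure assumption - the point is only that under superlinear growth, an extra gigabyte buys very few extra years):

```python
# Toy extrapolation of memory as a function of world age, anchored at
# the observed ~2 GB for a 300-year world. The exponent is an
# assumption, not a measurement.
def memory_gb(years, base_years=300, base_gb=2.0, exponent=2.0):
    return base_gb * (years / base_years) ** exponent

assert memory_gb(300) == 2.0
assert memory_gb(600) == 8.0  # doubling the age quadruples memory here
```

Under that assumption, one extra addressable gigabyte moves the ceiling from 300 years to only about 367.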
« Last Edit: February 19, 2012, 03:13:56 am by telamon »
Logged
Playing DF on Windows 98 since.... ?
At 55 frames per minute.

Urist McDepravity

  • Bay Watcher
    • View Profile
Re: Has anyone succesfully generated a very long history?
« Reply #59 on: February 19, 2012, 03:21:21 am »

Fair enough, but that still ignores the ripple effect of releasing a new compile. bugs, offset shifts and plain old sh!t happens are bound to occur somewhere, so even if a 64-bit release comes out in a month, the player base will probably take 2-3 months or more to start using such a release. I'd rather the sh!t happened in the process of improving a play mechanic in the game. Besides, it still doesn't solve the other problem I mentioned: switching from one address size to another doesn't instantiate memory. If there was memory waiting to be used, it might unlock it, but the physical limitation of the computer is still there. The benefits of moving to a new address system will be slim when the memory limitation of the user still exists.
You forget about the OS page cache. MongoDB successfully works with hundreds of GB of data without having that much RAM available, by memory-mapping its files and letting the OS page unneeded data out to disk.
I doubt the game actually uses the millions of events it has already generated in further calculations, so it wouldn't hurt performance if the OS paged them out to disk.
The practical difference is that the 32-bit version _crashes_ upon hitting the 2 GB address-space limit, while a 64-bit version would _slow down but keep working_ after exceeding all the RAM the machine has.
Logged