dragonshardz Posted October 25, 2012 I'm currently stuck with an 82945G chipset, but soon, provided the 8970 doesn't come out before then, I'll have a nice shiny Sapphire 7970 3GB GHz Edition with the Vapor-X cooler. (gigantic image attached) Holy gigantic image, Batman!
hoho Posted October 26, 2012 GTX 560 here, so GL4-level. I'd say ditch anything below GL 3.0; it's just not worth the effort, unless you have tons of extra time to waste instead of doing something useful.

Quoting an earlier post: "multi GPU is a lot different than multi CPU, in terms of programming required to make it work. most of the work is done in the driver, by splitting the load between the cards and making each one only render every other frame. the program in many cases doesn't even have to know it's happening."

Yeah, it kind of works automatically, but as long as you're not writing games using 10-year-old technologies (i.e. nothing rendered offscreen), scaling to more than one GPU WILL suck horribly, unless the ONLY thing you do is crank up anti-aliasing without changing anything else. And yes, the drivers do try to keep VRAM synchronized, but it's physically not possible to pool either the GPU processing or the RAM when GPUs access their VRAM at hundreds of GB/s while the buses between them carry just a tiny fraction of that. That's why you need custom coding to make multi-GPU worth a damn.
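To put rough numbers on that bandwidth gap, here's a back-of-the-envelope sketch in plain Java. The bandwidth figures are published specs (a Radeon HD 7970's ~264 GB/s memory bandwidth, PCIe 2.0 x16's ~8 GB/s per direction); the 64 MB per-frame payload is a made-up illustration, not a measurement:

```java
// Rough comparison of on-card VRAM bandwidth vs. the bus between GPUs.
// Figures are vendor specs: HD 7970 memory bandwidth ~264 GB/s,
// PCIe 2.0 x16 ~8 GB/s each way. Plain Java, no dependencies.
public class BandwidthGap {
    public static void main(String[] args) {
        double vramGBs = 264.0;  // HD 7970 GDDR5 bandwidth
        double pcieGBs = 8.0;    // PCIe 2.0 x16, one direction
        double frameMB = 64.0;   // hypothetical per-frame data to mirror

        System.out.printf("Copy %.0f MB within VRAM: %.3f ms%n",
                frameMB, frameMB / 1024.0 / vramGBs * 1000.0);
        System.out.printf("Copy %.0f MB across PCIe:  %.3f ms%n",
                frameMB, frameMB / 1024.0 / pcieGBs * 1000.0);
        System.out.printf("The bus is ~%.0fx slower, which is why VRAM "
                + "can't be pooled across cards.%n", vramGBs / pcieGBs);
    }
}
```

At those rates the bus is roughly 33x slower than local VRAM, so mirroring data between cards dominates any attempt at pooling.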
jakj (Author) Posted October 26, 2012 I had to stick with 2.1 for now so as not to leave out too many people. This thread alone has had three such responses.
Smurfkiller9000 Posted November 1, 2012 I have Intel HD Graphics 3000. It's crap, isn't it?
jakj (Author) Posted November 1, 2012 Intel integrated graphics does tend to be crap when it comes to advanced OpenGL support, yes, because Intel and Microsoft are bosom buddies and Intel supports as little of GL as it can get away with without starting a shitfest. It's not that Intel makes bad hardware, because it doesn't: it just doesn't give two fucks about anybody but Microsoft and Windows, which means terrible support in the Linux kernel (Intel graphics chipsets cause more frustrating bugs, glitches, crashes, and panics there than any other).
metalchic Posted November 1, 2012 From a bit of searching around, my understanding is that vanilla Minecraft requires only OpenGL 1.1 just to run. To that end, I had it running on a Pentium 4 HT 3.2GHz with a GeForce4 Ti 4600 8x, and I'm considering setting that computer back up so I can go for the Rage 128 Pro. I got about 30 fps with Technic under openSUSE 11; I believe I was using the legacy non-OSS driver. It also ran World of Warcraft. No, it did not make toast.
jakj (Author) Posted November 1, 2012 OpenGL 1.1 has a software-emulation mode, so you can run it even without a GPU at all.
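For anyone wondering which renderer they actually got, here's a minimal probe using LWJGL 2 (the same library Minecraft uses). One caveat: the "GDI Generic" string is what Microsoft's built-in software OpenGL 1.1 fallback reports on Windows; other platforms use different software renderers, so treat the check as illustrative:

```java
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

public class GLProbe {
    public static void main(String[] args) throws LWJGLException {
        // A context must exist before glGetString returns anything useful.
        Display.setDisplayMode(new DisplayMode(160, 120));
        Display.create();

        String renderer = GL11.glGetString(GL11.GL_RENDERER);
        System.out.println("Version:  " + GL11.glGetString(GL11.GL_VERSION));
        System.out.println("Renderer: " + renderer);

        // "GDI Generic" is Microsoft's unaccelerated GL 1.1 fallback.
        if (renderer.contains("GDI Generic")) {
            System.out.println("No hardware acceleration - everything runs on the CPU.");
        }
        Display.destroy();
    }
}
```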
okamikk Posted November 1, 2012 Quoting jakj: "OpenGL 1.1 has a software-emulation mode, so you can run it even without a GPU at all." That's... wow. No wonder my old computer nearly shat itself whilst playing vanilla Minecraft...
metalchic Posted November 1, 2012 That would definitely allow it to run on the Rage 128, then... but then why does Minecraft run so well on the Pentium 4 but not so well on the Pentium T2060 in my laptop? Unless the GMA950 supports just enough OpenGL to dick it up. And where does the supposed OpenGL 1.5 support of the GeForce4 Ti fit into all this?
jakj (Author) Posted November 1, 2012 Minecraft is single-threaded for the most part, and is system-bus bound in terms of rendering, so the two biggest factors in Minecraft speed are 1) how fast ONE CORE of your processor is, and 2) the width (bits per clock tick) and frequency (clock ticks per second) of your system bus. Since Minecraft uses only GL 1.1 and no shaders, the shittiest 3D-accelerated graphics card is just as good as a GTX 680, with the sole exception of VRAM (which matters only up to the point of being able to hold your entire texture pack in VRAM at once). So much time is spent in Minecraft redundantly transforming vertices and transferring data over the system bus that even the aforementioned shittiest graphics chipset will finish rendering the paltry couple of triangles Minecraft gave it before it even receives the next batch. You should really read up on the history of OpenGL and DirectX: it's rather fascinating. http://en.wikipedia.org/wiki/OpenGL#History
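To make that concrete, here's a minimal sketch of the GL 1.1 client-side vertex array pattern being described, in LWJGL 2 style (one hypothetical triangle standing in for a chunk; assumes a GL context is already current). The point is that the vertex data lives in the Java process's memory and crosses the system bus on every draw call, every frame:

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

public class ClientSideDraw {
    // One triangle standing in for a chunk's geometry.
    private static final float[] VERTS = {
        0f, 0f, 0f,   1f, 0f, 0f,   0f, 1f, 0f
    };

    /** Called once per frame. Assumes a GL context is already current. */
    static void drawFrame() {
        // Rebuild and re-upload the same vertices every frame -- the GL 1.1
        // pattern described above. The driver copies this buffer over the
        // system bus on EVERY glDrawArrays call; nothing stays on the GPU.
        FloatBuffer buf = BufferUtils.createFloatBuffer(VERTS.length);
        buf.put(VERTS).flip();

        GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        GL11.glVertexPointer(3, 0, buf);           // 3 floats per vertex
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
        GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}
```

A modern renderer would upload the geometry once into a VBO and reuse it every frame, which is exactly the step this style of code skips.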
metalchic Posted November 1, 2012 Hmm, so that means the Pentium 4 HT at 3.2GHz, with a 2x400MHz data bus to dual-channel DDR400 RAM, had a significant advantage over the Pentium T2060 with its 1.6GHz core, 533MHz bus speed, and dual-channel DDR2 533?

Augh, I'm having a hard time keeping these FSB numbers straight. Is it 2x533 (for 1066 effective) or 2x266 (for 533 effective)? I know the Pentium 4 HT lists its speed as 3.2/512/800/1.4, but the T2060 lists as 1.6/1M/533/1.3. Does this mean the T2060 technically has a slower bus than the Pentium 4? I guess I could see that, with the second channel of the dual-channel RAM setup being delegated more toward the shared GMA950 video memory... augh, I thought I had this figured out, and now I'm not sure.

Also: if I upgrade to a Core 2 Duo T5600 1.83GHz dual core, what impact might that have on the system with DDR2 667 RAM? And since part of the GMA950's shader work is handled in software, would a more powerful CPU result in higher performance across the board? With Minecraft being more CPU-bound, would the CPU be the more influential upgrade anyway? But Minecraft won't function without 3D acceleration, according to the error messages (attempting to run Minecraft in openSUSE without the i915-derivative video package generated a 'no suitable renderer found' error, same as trying to run in Windows with just the standard graphics adapter driver), and now I'm more confused than ever. One other question: is this an appropriate place to discuss this, or do I need to shut up?
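For what it's worth, the FSB arithmetic can be pinned down with a quick sketch: both chips use a 64-bit front-side bus, and the advertised 800 and 533 are already the effective (quad-pumped) transfer rates, i.e. a 200 MHz and a 133 MHz clock moving four transfers per tick:

```java
// Front-side bus bandwidth: (transfers/sec) x (8 bytes per 64-bit transfer).
// The "800" and "533" are MT/s, i.e. a 200 MHz and 133 MHz clock quad-pumped.
public class FsbMath {
    static double gbPerSec(double megaTransfers) {
        return megaTransfers * 8 / 1000.0; // 64-bit bus = 8 bytes/transfer
    }
    public static void main(String[] args) {
        System.out.printf("Pentium 4 (FSB 800): %.1f GB/s%n", gbPerSec(800));
        System.out.printf("T2060     (FSB 533): %.1f GB/s%n", gbPerSec(533));
        // Dual-channel DDR400 matches the P4's bus: 2 x 400 MT/s x 8 B = 6.4 GB/s.
    }
}
```

So the P4's bus moves about 6.4 GB/s and the T2060's about 4.3 GB/s: the newer laptop really does have the slower bus.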
jakj (Author) Posted November 1, 2012 If your VRAM is actually your RAM and you don't have dedicated VRAM, that adds a whole other ball of wax: your system bus becomes even -more- important, and your RAM's clock speed and latency come into the picture in a big way. Forget shaders, because Minecraft doesn't use them: it's purely fixed-function, with client-side vertex arrays that are reconstructed for every object every frame. Hands down, the CPU's single-core speed is your #1 upgrade for Minecraft.
metalchic Posted November 1, 2012 Sorry, I have a terrible habit of smashing my questions together in a horrific manner. You answered the GMA950 question: ultimately, for everything the GMA950 can do, a faster CPU and RAM will improve the laptop's performance, likely significantly, by upping the FSB, RAM speed, and core clock speed. The second question was about the GeForce GTX 550 Ti 1GB in my desktop computer. It has 1GB of dedicated VRAM, separate from the 4GB of system memory. Does this card having more VRAM help? Could Minecraft benefit from more VRAM, or is it texture-pack dependent? Or is it completely irrelevant, with my E7500 with 4GB of RAM at 1333 FSB doing all the work and my video card doing absolutely nothing?
jakj (Author) Posted November 1, 2012 Having dedicated VRAM is great for keeping textures on the card, so you get a benefit from that. My one-gig card handles up to 128x packs. With no texture pack, Minecraft would barely touch a fraction of that gig, but it's still better than shared memory.
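As a rough sanity check on how little VRAM that actually is, here's a sketch that assumes the classic single terrain.png atlas (a 16x16 grid of tiles, RGBA, with mipmaps adding about a third); real packs ship more textures than just the terrain atlas, so treat these as lower bounds:

```java
// VRAM cost of a Minecraft terrain atlas at various texture-pack sizes.
// Assumes a 16x16-tile atlas (the classic terrain.png layout), 4 bytes
// per RGBA texel, and mipmaps adding ~1/3 on top.
public class AtlasMemory {
    public static void main(String[] args) {
        for (int tile : new int[] {16, 32, 64, 128}) {
            long side = 16L * tile;                 // atlas is 16 tiles wide
            long bytes = side * side * 4;           // RGBA, 4 bytes/texel
            double withMips = bytes * 4.0 / 3.0;    // mipmap chain overhead
            System.out.printf("%3dx pack: %4d px atlas, ~%.1f MB with mipmaps%n",
                    tile, side, withMips / (1024.0 * 1024.0));
        }
    }
}
```

Even a 128x pack works out to roughly 21 MB for the terrain atlas, barely a dent in a one-gig card.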
andrewdonshik Posted November 1, 2012 Quoting jakj: "Having dedicated VRAM is great for keeping textures on the card [...] it's better than shared memory." Yeah, 128x graphics make my card cry. I don't really like anything above 32x anyway, even though 64x is usually fine. (Actually, I don't think I've tried 128x on my current card. Testing time.)
MrFly Posted November 1, 2012 Radeon 4250 is mine, and it can run almost anything.
metalchic Posted November 2, 2012 OK, so I set up the Pentium 4 computer with a different Pentium 4. It's set up like this:

- Pentium 4 2GHz, 400MHz FSB, 512KB L2 cache, mPGA478
- 4GB DDR400 RAM (underclocked to DDR266 with a 1.33 memory divider)
- nVidia GeForce4 Ti 4600-8x 128MB, AGP 8x
- Soyo SY-PI875P DRAGON 2 Platinum Edition motherboard (quite the relic, I know)
- HP OEM PCI WiFi card of no consequence
- Primary hard drive is UltraATA/133, connected to the Soyo's integrated HighPoint IDE RAID card
- OS is Ubuntu 12.04 LTS

I tried a couple of different drivers. The default driver that comes with Ubuntu (I believe it's called nouveau) only provides 2D acceleration for nVidia cards; upon starting Minecraft with it, I got blasted with a load of artifacts under the buttons on the title screen, and less than 1 fps. I replaced it with nVidia's official Linux driver for the GeForce4, version 96.43.23. I set up Technic Pack to run through Magic Launcher and added OptiFine Multithreaded (the same setup I use on my desktop computer). Unfortunately, my first test world dropped me into a jungle (I don't know if they do this for you, but even my desktop gets dragged down to ~70 fps in jungles), so I skipped the regular map test and did a flatland map test, where I managed to get a screenshot (image attached; I hope the forum resizes it :S).

Looking at nothing, it could get up to 60 fps, but I don't think that's a fair test; the jungle terrain could also get up to 24-30 fps. That brings me to my final thought: the actual chunk loading seems to be what's messing up the framerates. Pressing Esc causes the chunks in view to load much faster, and that would bring up the framerate in the jungle until I started moving and it needed to load more chunks. I think in this situation it might be more on the CPU, because it's not just loading and rendering the chunks but also generating them at the same time. (I just tested this by loading an existing map that my desktop had already generated out to Extreme view distance, and the loading was not any faster.)

So this is confusing, as the framerate is only slightly lower than the 3.2GHz Pentium 4's with the same specs on the rest of the computer. Do you think it's because the 3.2GHz is only mildly more powerful than the 2GHz version, or because of something else? (I feel like I'm missing something I wanted to say here...)
jakj (Author) Posted November 2, 2012 Well, after a certain point you may get disk-bound. Even my GTX 460, 3.4GHz CPU, and super-fast RAM get me 60 fps only after the chunks are loaded. An SSD would maybe get you a couple more frames, but it's not worth the money just for Minecraft, because the CPU limit would kick right back in. I'd say take what you have and enjoy it, personally.
LazDude2012 Posted November 2, 2012 You'll probably want a dual-core processor. Or just take what you've got, yeah.
jakj (Author) Posted November 2, 2012 Dual core will help very little, beyond letting the OS run concurrently. The OS shouldn't be doing much anyway, ideally.
GreenWolf13 Posted November 2, 2012 Quoting jakj: "Dual core will help very little, beyond letting the OS run concurrently. The OS shouldn't be doing much anyway, ideally." What if you have other processes open at the same time? Say you have the Eclipse IDE, Firefox (with several tabs, one of them a YouTube video), and Minecraft (possibly a recompiled version of a mod you're testing) running, as well as several folders. Would having dual core help?
metalchic Posted November 2, 2012 That's the thing: this is a learning experiment, not for the betterment of my Minecraft desktop experience. My desktop computer has a Core 2 Duo E7500 CPU overclocked to 3.6GHz that pushes Minecraft around at over 150 fps. Just about everything I know about computers is self-taught, and I'm trying to further my understanding of how this stuff works; I was made painfully aware that I didn't know enough the last time I tried to learn programming. It also improves my ability to problem-solve these issues in the future, by better knowing what slowdowns occur with Minecraft, why, and where, so I can help alleviate these kinds of problems later. I can't tell if I'm making any kind of sense here.

Quoting GreenWolf13: "What if you have other processes open at the same time? [...] Would having dual core help?" Multiple cores would definitely help with that multitasking, but you wouldn't get more performance from Minecraft, just less slowdown.
jakj (Author) Posted November 2, 2012 Be careful about using Minecraft as any sort of performance metric: it does things in ways most people would never even have imagined, and it requires a whole different set of performance adjustments than any real software you may use.
metalchic Posted November 2, 2012 Quoting jakj: "Be careful about using Minecraft as any sort of performance metric [...]" fjkahsiwfhasdilfasdoiejwfhwsdifuashdfiusdhfaksjwefdlsfjf. I spent way too long writing and rewriting this post. A hobby of mine is to trash-pick old computers and put them together in unintended ways, and I've been looking into Minecraft as one of the various tools I use to measure the performance of such computers, so I can further push them in directions science was not meant to go (*cough* a P90 paired up with a GeForce FX 5950 *cough*).
SimpleGuy Posted November 2, 2012 To extend jakj's statement: be careful with any sort of performance metric. There are so many measuring sticks out there, and not all of them measure equally on the same systems. For example, next-generation supercomputers using heterogeneous platforms (GPU/CPU pairs) can tout all the peta- and zettaFLOPS they want, but that's an old metric from the days when CPU floating-point operations per unit time were the limiting factor. The current limiting factor for them is busing data between nodes and unnecessarily duplicating data in software (for 3-D problems, up to an order of magnitude more data and calculations). So I can have a supercomputer with higher FLOPS that is actually slower because of the busing issue.
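That trade-off can be made concrete with a quick roofline-style check, using illustrative numbers rather than any particular machine's specs: the interconnect bandwidth times the kernel's arithmetic intensity caps the FLOPS a node can actually sustain:

```java
// Back-of-the-envelope roofline: is a node compute-bound or bus-bound?
// Illustrative numbers only, not any specific machine's specs.
public class Roofline {
    public static void main(String[] args) {
        double peakTflops = 10.0;       // advertised compute, TFLOPS
        double busGBs = 10.0;           // node interconnect, GB/s
        double flopsPerByte = 2.0;      // arithmetic intensity of the kernel

        // The bus can feed at most busGBs * flopsPerByte GFLOPS of work.
        double fedGflops = busGBs * flopsPerByte;
        double peakGflops = peakTflops * 1000.0;

        System.out.printf("Peak compute:        %.0f GFLOPS%n", peakGflops);
        System.out.printf("Bus can sustain:     %.0f GFLOPS%n", fedGflops);
        System.out.printf("Achievable fraction: %.1f%% of peak%n",
                100.0 * Math.min(1.0, fedGflops / peakGflops));
    }
}
```

With these numbers the node sustains 20 GFLOPS out of a 10,000 GFLOPS peak, i.e. 0.2%: a "faster" machine on paper, crawling in practice.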