Message boards : Number crunching : Beyond newbie Q&A
EW-3 Send message Joined: 1 Sep 06 Posts: 27 Credit: 2,561,427 RAC: 0 |
Thanks guys, as usual I learn a lot here. I'm curious about performance (it's in my blood). While we can compare hardware with benchmark programs, is there any metric that actually measures a program's performance on a given platform? Something like saying this program uses x% floating point, y% integer, z% generic instructions. I guess we could add the number of memory accesses required per instruction as a further metric, so we have x1, x2, x3... |
Mats Petersson Send message Joined: 29 Sep 05 Posts: 225 Credit: 951,788 RAC: 0 |
Thanks guys, as usual I learn a lot here. There's software that can measure the actual counts of different classes of instructions and memory accesses (not specifically splitting integer and generic instructions, as they are, in my mind, the same thing - what's the difference between an ADD instruction used to generate an address and an ADD instruction used to calculate the result of some mathematical formula? If the calculation is integer, you use integer instructions, but they aren't different from the ones the compiler uses to calculate various internal stuff that the compiler needs just to get the result of some calculation). Some of the results may not be precise, or the value you get may only let you infer the data you actually asked for (for example, you may not be able to see how many cycles are used by the ALU, but only how many cycles are stalled because the ALU is too busy to run the next instruction in the pipeline). Of course, you could also look at the binary of the executable to figure out how much of the code is written to use which part of the processor (and from this you could possibly infer whether the result of a calculation is "integer math" or "address calculation" - since this sort of analyzer could analyze more than single instructions and follow the flow of operations - not that it REALLY makes a whole lot of difference). On x86 the number of memory accesses per instruction will always be less than or equal to one (with a couple of really rare exceptions, the obvious one being CMPXCHG16B, which does two consecutive 8-byte accesses on the bus with an implied LOCK, so we can modify 16 bytes in one instruction). All the common instructions allow only one operand to be a memory operand. Unfortunately, the more complex question here is how many of the memory operands are fetched from L1 cache, how many from L2 cache, and how many from "real" memory?
That is a much more important question than how many memory accesses the application does, as there's a large amount of extra time spent on a "real" memory access compared to an L1 cache access, with L2 cache falling somewhere in the middle between the two. -- Mats |
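The cache-hierarchy effect Mats describes can be sketched even without hardware counters (which tools like Linux `perf stat` expose properly): walking a large array sequentially is cache-friendly, while walking it in random order defeats the prefetcher and forces more L2/memory traffic. A rough, hedged sketch - in Python the interpreter overhead masks much of the hardware effect, so treat the numbers as illustrative only:

```python
import random
import time

def avg_access_ns(indices, data):
    """Time the average cost of reading data[i] for each index in order."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    elapsed = time.perf_counter() - start
    return elapsed * 1e9 / len(indices), total

N = 1 << 20                 # ~1M integers, larger than typical L1/L2 caches
data = list(range(N))

seq = list(range(N))        # cache-friendly: sequential walk
rnd = seq[:]
random.shuffle(rnd)         # cache-hostile: random walk over the same data

seq_ns, _ = avg_access_ns(seq, data)
rnd_ns, _ = avg_access_ns(rnd, data)
print(f"sequential: {seq_ns:.1f} ns/access, random: {rnd_ns:.1f} ns/access")
```

In a compiled language the gap between the two walks is typically several-fold; the proper way to get the cache-miss counts themselves is a hardware-counter tool rather than wall-clock timing.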
EW-3 Send message Joined: 1 Sep 06 Posts: 27 Credit: 2,561,427 RAC: 0 |
[quote]Unfortunately, the more complex question here is how many of the memory operands are fetched from L1 cache, how many from L2 cache, and how many from "real" memory? That is a much more important question than how many memory accesses the application does, as there's a large amount of extra time spent on a "real" memory access compared to an L1 cache access. -- Mats[/quote] Hadn't thought about the cache memory; that does add a big degree of complexity. Some of that would be the process swap by the OS, but more important would seem to be designing a tight, efficient loop to do iterative tasks. I have wondered about something, and this might be the right place to ask. I have tried to bump Rosetta to a realtime priority in XP (Home) and it bombs; I have to start the WU all over again. Is that a function of the OS? The reason I tried this was to avoid swapping, which would tend to let the L1 and the L2 just sit there and scream... |
Mats Petersson Send message Joined: 29 Sep 05 Posts: 225 Credit: 951,788 RAC: 0 |
I have tried to bump Rosetta to a realtime priority in XP (home) and it bombs I have to go back to start the WU all over again. Is that a function of the OS? Reason I tried this was to avoid swapping, which would tend to let the L1 and the L2 just sit there and scream... That's strange - when I tried that, it worked just fine, but the system was completely useless, since all the processor time was given to the Rosetta task and no CPU time was available for my actions... It also seemed like BOINC was monitoring the priority and resetting it, but I could have that wrong... It shouldn't crash, just not let any other tasks run (any that aren't also real-time tasks). However, the idea is that real-time tasks shouldn't run for very long periods at a time, and thus let other tasks run for some time every now and again; I don't really expect everything in the system to "work fine" if there are tasks running 100% CPU time for _VERY_ long periods - and there may even be essential tasks that need to run at lower priority that may cause the system to crash if they are not performed... Cache content remains between task switches - but of course, the new task will need to load some things from memory, which will most likely go through the cache, and if the new task is sufficiently complex and/or long-running, it will most likely thrash the entire contents of the cache. But a task switch to a small process that just updates a few variables shouldn't touch more than a few lines in the cache for the code and data accessed. If the system isn't really being used for something else (in which case you probably want that to be done, rather than run Rosetta), I wouldn't expect any measurable difference between real-time and idle-time priorities. -- Mats |
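As an aside on priorities: an unprivileged process can normally only *lower* its own scheduling priority; raising it (and especially jumping to a real-time class) requires administrator or root rights, and as Mats notes it can starve the rest of the system. A minimal sketch of the safe direction, assuming a Unix-like system (on Windows one would call `SetPriorityClass` via `ctypes` instead - not shown here):

```python
import os

def run_niced(delta=10):
    """Lower this process's scheduling priority on Unix-like systems.

    Positive delta makes the process 'nicer' (lower priority), which
    needs no special rights; negative delta (higher priority) does.
    Returns the new niceness, or None where os.nice is unavailable.
    """
    if hasattr(os, "nice"):       # os.nice does not exist on Windows
        return os.nice(delta)     # returns the resulting niceness value
    return None
```

This is the same idea BOINC itself uses: science applications run at the lowest priority so the machine stays responsive, which is usually a better trade than forcing real-time.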
EW-3 Send message Joined: 1 Sep 06 Posts: 27 Credit: 2,561,427 RAC: 0 |
I have tried to bump Rosetta to a realtime priority in XP (home) and it bombs I have to go back to start the WU all over again. Is that a function of the OS? Reason I tried this was to avoid swapping, which would tend to let the L1 and the L2 just sit there and scream... Just a guess - you are running Linux and XP Pro on your machines; my box has XP Home. |
Mats Petersson Send message Joined: 29 Sep 05 Posts: 225 Credit: 951,788 RAC: 0 |
Just a guess - you are running Linux and XP Pro on your machines, my box has XP home. Sure do - but I doubt that there's any difference between XP Pro and Home when it comes to how the kernel works, or what happens if you run all available CPUs at real-time priority. There are, however, other factors that may differ - you probably don't have the same hardware in your machine as I have in mine, for example... or the same amount of memory per CPU, etc. The deep internals of XP Pro and XP Home are exactly the same, but XP Pro has a few extra features on the side that make it more suitable for an office (for example, you can connect to a Windows Domain with Pro, which you can't with Home). The number of supported processors is also different, but that shouldn't make any difference in my case, as I'm only using a single core on my machine anyway... -- Mats |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
Q: How can I set up BOINC to capture additional debug information to help with problem determination? A: There are some simple XML files that can be modified with a plain text editor, such as Wordpad on Windows. By default, these properties are set to not capture debug information. These debug features are a part of BOINC, and so BOINC has a page that describes their use: Link to the BOINC client debugging page (thanks fluffy chicken for the info.) Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
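For reference, the XML file in question is the BOINC client's `cc_config.xml`, whose `<log_flags>` section switches extra logging on. A minimal hedged sketch - the flag names below (`task_debug`, `http_debug`) follow BOINC's client-configuration documentation, but check the page linked above for the authoritative list:

```python
import xml.etree.ElementTree as ET

# Minimal example cc_config.xml enabling two common debug log flags.
# Flag names are taken from BOINC's client-configuration docs; verify
# against the BOINC client debugging page before relying on them.
CC_CONFIG = """\
<cc_config>
  <log_flags>
    <task_debug>1</task_debug>
    <http_debug>1</http_debug>
  </log_flags>
</cc_config>
"""

def enabled_flags(xml_text):
    """Return the names of log flags set to 1 in a cc_config document."""
    root = ET.fromstring(xml_text)
    flags = root.find("log_flags")
    return [f.tag for f in flags if f.text and f.text.strip() == "1"]

print(enabled_flags(CC_CONFIG))
```

The file lives in the BOINC data directory, and the client re-reads it on restart (or via the manager's "read config file" option in later versions).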
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
Q: Why is Rosetta attempting to access the internet directly? I thought all internet access was done via BOINC. A: If you have a firewall that limits which applications can access the internet, you may see times when rosetta_5.41_windows_intelx86.exe is trying to access the internet (note that the version embedded in the name changes over time, and the exact name is slightly different on non-Windows platforms). This indicates that a failure-capture routine in the Rosetta code has been invoked. It is attempting to report some additional diagnostic information back to the project to help determine why a failure occurred in your environment. This is debug data, not the normal BOINC output file, so it doesn't go through BOINC to send it. Please allow the Rosetta application to report this information. This will help the project team eliminate bugs. Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0 |
Q: Why is Rosetta attempting to access the internet directly? I thought all internet access was done via BOINC. There is an issue here. Background: Modern firewalls, the ones that claim to be 'proactive', seek to learn which apps contact the net legitimately and which are suspicious. Recent versions of the Kaspersky firewall, and of ZoneAlarm, do this; no doubt many others. When an app that has never contacted the net before tries to make contact the first time, the firewall puts up a message window asking the user if this is legitimate, and the user can tell the firewall to allow it just this once, or to 'add it to the trusted zone' (in Kaspersky speak) so that future accesses are allowed silently. Each time Rosetta releases a new version, the name of the app changes (the bold numbers in the previous posting). This means that the upload of the debug info pauses till the user intervenes to allow it. On an unattended machine this means a long wait, during which the machine is losing production. It could be overnight or over a weekend, or even longer on a dedicated BOINC box. The issues: there is lost production time, and users may therefore be tempted to disallow the reporting of debug info. That, in my opinion, would be a mistake. Bad solutions: To stop sending the debug reports would be inappropriate - the info is too valuable to the devs. To call all app versions by the same name would be unworkable due to issues of filenames when versions change (there is a transition when work from more than one version may be held locally, and on a multi-cpu box there can even be different versions running at the same time). My suggested workaround: 1. For Rosetta to publish the exact URL that the app contacts when it reports this info, then 2. To incorporate this into a FAQ about 'how do I allow Rosetta to report debug info thru a firewall' 3. For users to explicitly add that URL to the trusted zone of their firewall.
The easy option for users is to do so for all running apps; a more labour-intensive option is to do this explicitly for each new version of Rosetta. With some firewalls you may be able to use wild cards in the app name, perhaps? That would be ideal, but I don't know if it is possible. Regards, River |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
River, according to one user report, this is the address: 207.46.248.241 => msdl.microsoft.com Perhaps you could use that to confirm adding to trusted zone resolves the issue as you intend. Rosetta Moderator: Mod.Sense |
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0 |
River, according to one user report, this is the address: In that case I am confused. I thought this was a debug report going back to BakerLab, but that address is a Microsoft debug catcher. I agree that if people allow Microsoft to collect bug info from their boxes then they should also set the firewall to allow the packets out without prompting, but what I actually had in mind was to allow debug packets going to BakerLab. Of course, I can find the current address of BakerLab with nslookup or any similar utility, but by asking for this to be 'published' I really meant that if the app is going to send debug info directly to BakerLab (rather than via the BOINC client), then BakerLab should 'publish' the IP in the sense that a) it is documented somewhere online, and b) there is a commitment to tell users if the IP changes, rather than relying only on the nameserver system. Unpublished changes to the IP that arrive only via the nameserver changing would, of course, cause proactive firewalls to start prompting for advice. River~~ |
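River's nslookup check - compare what DNS currently returns against a published IP - can be sketched with the standard library. The hostname and address below are placeholders for illustration, not the project's real ones:

```python
import socket

def ip_matches(hostname, published_ip):
    """Resolve hostname via DNS and compare against a published IPv4 address.

    A mismatch may simply mean the project moved servers - but that is
    exactly the case where a 'proactive' firewall starts prompting again.
    Returns True/False, or None if resolution fails entirely.
    """
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        return None
    return resolved == published_ip

# Placeholder values, for illustration only:
print(ip_matches("localhost", "127.0.0.1"))
```

If the project published the IP as River suggests, a check like this could run unattended and flag when the firewall rule needs updating.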
Procyon Lotor One Send message Joined: 6 Jun 06 Posts: 3 Credit: 152,026 RAC: 0 |
Is there a selection you can make so that you only get 5.96 projects - that is no mini versions at all? |
Greg_BE Send message Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Is there a selection you can make so that you only get 5.96 projects - that is no mini versions at all? No - you get whatever the project server has in its queue that fits your system's setup. |
Angus Send message Joined: 17 Sep 05 Posts: 412 Credit: 321,053 RAC: 0 |
Just abort the broken minis. Proudly Banned from Predictator@Home and now Cosmology@home as well. Added SETI to the list today. Temporary ban only - so need to work harder :) "You can't fix stupid" (Ron White) |
KWSN THE Holy Hand Grenade! Send message Joined: 3 May 07 Posts: 5 Credit: 2,542,452 RAC: 0 |
I've had a problem... and am not sure what thread to post this in, being a relative newcomer to the Rosetta boards... (moderator: feel free to move this to the proper thread...) I'm running a Mini 1.34 WU (abinitio_nohomfrag_70_A_4icbA_4466_73178_0) and, at 79.207% done, the task hung... on an Opteron dual core, with SETI (production, not Beta, which I also run...) running in the other core. My (Windows) Task Manager reads 50%, with 50% idle, and the mini-Rosetta task is in memory. The Tasks tab reads "Running, High Priority", but no advance was being made... Things I tried: suspending the BOINC client did nothing, but restarting the BOINC client fixed the problem. Why would this happen? |
Greg_BE Send message Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Hi, welcome to the project. You can repost this over in the problems-with-1.34 thread (not the exact title of the thread), then add a weblink to the task in question. You can also post the same thing in the long-running-tasks thread. I've had a problem... and am not sure what thread to post this in, being a relative newcomer to the Rosetta boards... |
Andrew Butterworth Send message Joined: 9 Jun 09 Posts: 1 Credit: 115,333 RAC: 0 |
I want to run my own protein structure predictions on Rosetta, is this possible? |
Max DesGeorges Send message Joined: 1 Oct 05 Posts: 35 Credit: 942,527 RAC: 0 |
|
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
The link is to how BakerLab can help you use their program to work on your protein. The work will be done in BakerLab though, not run by volunteer computers as part of Rosetta@home. Rosetta Moderator: Mod.Sense |
Mack Send message Joined: 27 Apr 07 Posts: 3 Credit: 185,160 RAC: 0 |
I'm not new to BOINC as I have about 750,000 units completed on SETI. I have noticed that since I joined Rosetta yesterday two units ran and then did not upload due to a computation error - is this normal? |
©2024 University of Washington
https://www.bakerlab.org