WARNING: This website is obsolete! Please follow this link to get to the new Albert@Home website!
Message boards :
Problems and Bug Reports :
[New release] BRP app v1.23/1.24 (OpenCL) feedback thread
Author | Message |
---|---|
TRuEQ & TuVaLu Send message Joined: 11 Sep 06 Posts: 75 Credit: 615,315 RAC: 0 |
I've run a few tasks with 1.24 and it looks fine: 60%-70% GPU usage. The task reserves 0.932 CPUs, but it doesn't use more than 40% of a core and has an average CPU usage of 25%. It works well with the 0.5 option and is able to run alongside Milkyway, SETI, POEM and PrimeGrid without a problem. |
TRuEQ & TuVaLu Send message Joined: 11 Sep 06 Posts: 75 Credit: 615,315 RAC: 0 |
I must add this: when a CPU project is running on 1 core and an Albert GPU task starts, the CPU project goes into the "waiting to run" state. Any chance that you can make the Albert GPU task be scheduled as a GPU task and not, as it is now, as a CPU task? |
Bikeman (Heinz-Bernd Eggenstein) Volunteer moderator Project administrator Project developer Send message Joined: 28 Aug 06 Posts: 1483 Credit: 1,864,017 RAC: 0 |
Hi all, In preparation for "the launch", we are currently experimenting with validator settings. This will cause an artificially high rate of invalid results in the next few hours, but it allows us to collect some important data. So nothing to worry about :-) Cheers HB |
Christoph Send message Joined: 25 Aug 05 Posts: 48 Credit: 208,211 RAC: 0 |
My older task is done, but it is still shown as running. While trying to save all messages to memory to dump them, BM is hanging again. Will report via the Alpha email list. Christoph |
[VENETO] boboviz Send message Joined: 6 Oct 06 Posts: 7 Credit: 344,106 RAC: 0 |
I don't understand. With the 1.23 version I had the strange "stop and go" behavior; now with 1.24, another problem. With version 1.23, my 4-core CPU ran 3 CPU WUs of another project and 1 A@H GPU WU on my graphics card. Now the A@H GPU task indicates a use of 0.95 CPUs, but when I run 4 CPU WUs the GPU WU is VERY slow (40% after 5 h). If I crunch only 3 CPU WUs (suspending the others), the GPU WU accelerates and finishes very fast. I tried restarting the client, but nothing changed. I don't use an XML configuration file. |
TRuEQ & TuVaLu Send message Joined: 11 Sep 06 Posts: 75 Credit: 615,315 RAC: 0 |
I don't know if this is a BM thing or an Albert app thing, but the downloaded tasks show an estimated runtime of 20 hours and they all finish in about 1-2 hours. I use BM 7.0.27. All other projects adjust as they should, except POEM@Home, which sometimes shows strange numbers. Have I run too few tasks? |
TRuEQ & TuVaLu Send message Joined: 11 Sep 06 Posts: 75 Credit: 615,315 RAC: 0 |
I am now running 1 Albert task on my ATI 5850 with 0.932 CPUs and 1 SETI AP task on the same GPU. I also noticed 2 CPU tasks running at the same time on my 2 cores: one CPU project runs fully on 1 core, and the other runs on the Albert core with 0.1 CPUs. I cannot be sure that the tasks use different cores, though; I lack the knowledge of how to track the threads. My conclusion is that I now have 1 CPU project running with 0.1 CPUs on the same core that feeds the GPU for Albert. If this is correct, any chance that you can free some more % from the running Albert task? Albert only uses below 50% of 1 CPU core. |
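As an aside for anyone who wants to tune the reservation themselves: newer BOINC clients (7.0.40 and later) read an app_config.xml file from the project directory that overrides the CPU reservation per GPU task. A minimal sketch, assuming the application name is einsteinbinary_BRP4 (a guess; the real name is listed in client_state.xml). Note this changes only how much CPU the scheduler reserves, not how much the app actually uses:

```xml
<app_config>
  <app>
    <!-- hypothetical app name: check client_state.xml for the actual one -->
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <!-- run one task per GPU, reserving only 0.2 of a CPU core for it -->
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```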
Ver Greeneyes Send message Joined: 18 Nov 11 Posts: 6 Credit: 861,017 RAC: 0 |
I cannot be sure that the cpu cores use different cores though... Your operating system's scheduler should take care of this unless you force specific applications to use specific cores. Basically the way it works is that applications/threads get 'time slices' from the OS scheduler, which is how it can run multiple applications side by side on a single core. Between time slices the scheduler might decide to continue to run a thread on a different core depending on how busy each core is - that's why you generally see even single-threaded applications using a bit of each core: because they spend about equal time running on each one. |
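A quick way to see this from user space (a minimal sketch, Linux-only, using only Python's standard library): query which cores the scheduler may place a process on, then pin the process to one core so migration stops.

```python
import os

# By default a process may run on every core, which is why the
# scheduler is free to migrate its threads between cores on
# successive time slices.
allowed = os.sched_getaffinity(0)  # 0 = the current process
print(f"may run on cores: {sorted(allowed)}")

# Pinning removes that freedom: after this, all of the process's CPU
# time shows up on core 0 instead of being spread across cores.
os.sched_setaffinity(0, {0})
print(f"now restricted to: {sorted(os.sched_getaffinity(0))}")
```

Tools like `taskset` do the same thing from the command line; BOINC itself does not pin tasks, so per-core usage graphs show the load smeared across cores.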
X1900AIW Send message Joined: 6 May 12 Posts: 2 Credit: 435,065 RAC: 0 |
New day, new test: it's running! RAM usage system: 208 MB max., 84 MB at the moment. GPU: 43 percent usage with (4,596 CPUs + 1 ATI GPU); GPU: 95 percent usage with (3,596 CPUs + 1 ATI GPU). GPU temp: 50 degrees stock, 65 degrees @ Albert@Home. Estimated runtime for the Albert workunit: 24 hours, deadline in 14 days. It runs, but with noticeable lags @ 95 percent GPU usage. CPU usage in BOINC is suboptimal with 3,596 CPUs. |
Infusioned Send message Joined: 11 Feb 05 Posts: 45 Credit: 149,000 RAC: 0 |
These WUs show BRPCUDA32 v1.25 throwing errors:
http://albert.phys.uwm.edu/workunit.php?wuid=69412
http://albert.phys.uwm.edu/workunit.php?wuid=70631
http://albert.phys.uwm.edu/workunit.php?wuid=70986
http://albert.phys.uwm.edu/workunit.php?wuid=71008
Most of them are from the same host (GTX 480), with one from this host (GTX 285).

<core_client_version>7.0.25</core_client_version>
<![CDATA[
<message>
Cannot create a symbolic link in a registry key that already has subkeys or values. (0x3fc) - exit code 1020 (0x3fc)
</message>
<stderr_txt>
Activated exception handling...
[08:07:13][4260][INFO ] Starting data processing...
[08:07:13][4260][ERROR] Couldn't initialize CUDA driver API (error: 100)!
[08:07:13][4260][ERROR] Demodulation failed (error: 1020)!
08:07:13 (4260): called boinc_finish
</stderr_txt>
]]>

Also, the BRPSSE3 v1.22 client is throwing errors:
http://albert.phys.uwm.edu/workunit.php?wuid=70871
http://albert.phys.uwm.edu/workunit.php?wuid=70837
(from the same host)

<core_client_version>6.10.60</core_client_version>
<![CDATA[
<message>
too many exit(0)s
</message>
]]> |
Christoph Send message Joined: 25 Aug 05 Posts: 48 Credit: 208,211 RAC: 0 |
So, I finally caught a running task. HD 5450, 1 GB memory, max workgroup size 128. Memory use as per GPU-Z: 416 MB dedicated, around 70 MB dynamic. GPU load: 96%. Christoph |
terencewee* Send message Joined: 2 Feb 12 Posts: 5 Credit: 4,500 RAC: 0 |
v1.24: The app still corrupts the screen with square dots during the initial start-up, but there is no driver restart. Memory usage is 369 MB (dedicated), ~39 MB (dynamic). Seems faster. Will be running consecutive WUs this round on this host. 1st result reported. 2nd WU ran fine without any square dots on screen; memory usage is higher: 475 MB (dedicated), ~38 MB (dynamic). -- terencewee* Sicituradastra. |
terencewee* Send message Joined: 2 Feb 12 Posts: 5 Credit: 4,500 RAC: 0 |
First WU awaiting validation. Second WU completed & validated against a CUDA device. Good job! Processing third WU - no square dots on screen. Memory usage back to 369 MB (dedicated), ~38 MB (dynamic). Looks like there may be a problem with the initial run. Scenario 1: Reboot > BOINC > run WU. Square dots on screen, no driver restart; consecutive WUs ran fine, with no square dots on screen and no driver restart. Scenario 2: Reboot > run some apps > BOINC > run WU. Will report back tomorrow. -- terencewee* Sicituradastra. |
Bikeman (Heinz-Bernd Eggenstein) Volunteer moderator Project administrator Project developer Send message Joined: 28 Aug 06 Posts: 1483 Credit: 1,864,017 RAC: 0 |
Hi all! What we are beginning to see as a trend is that HD 6900 series cards have a far harder time producing cross-validating results than both older and newer cards (meaning they seem to produce less accurate results with the current app). The difference is not dramatic, but I wonder whether HD 6900 owners are experiencing this in other projects as well? Cheers HB |
Infusioned Send message Joined: 11 Feb 05 Posts: 45 Credit: 149,000 RAC: 0 |
Wow. I find that very strange, as the 69xx series cards are double-precision capable, versus the single-precision NVIDIA and single-precision AMD (54xx-57xx, 63xx-68xx, 73xx-76xx) cards. At Milkyway, double-precision cards are required. I haven't had any validation errors with my 6950. http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units |
Bikeman (Heinz-Bernd Eggenstein) Volunteer moderator Project administrator Project developer Send message Joined: 28 Aug 06 Posts: 1483 Credit: 1,864,017 RAC: 0 |
The Einstein@Home app does not need (and does not use) any double precision arithmetic on the GPU, so this should not be a factor. At the moment the higher validation failure rate for 6900 series cards is just an observed correlation, not a claim of causality :-), as the number of such cards on the Albert@Home project is just too small. It could be an indirect effect: e.g. the FFT lib could switch to a different, but less accurate, code path on 6900 cards because of differences in the runtime characteristics. We'll look into it. Any experience wrt this from other projects is welcome. Cheers HB |
Infusioned Send message Joined: 11 Feb 05 Posts: 45 Credit: 149,000 RAC: 0 |
The Einstein@Home app does not need (and does not use) any double precision arithmetic on the GPU, so this should not be a factor.

I am aware. The point I was trying to make, though, is that how the math is coded matters greatly and does impact the precision of the final answer. Let's take for example pi^16 (exaggerated for show) with 3 different approximations of pi:

n | 3^n | 3.1^n | pi()^n
---|---|---|---
1 | 3 | 3.1 | 3.141592654
2 | 9 | 9.61 | 9.869604401
3 | 27 | 29.791 | 31.00627668
4 | 81 | 92.3521 | 97.40909103
5 | 243 | 286.29151 | 306.0196848
6 | 729 | 887.503681 | 961.3891936
7 | 2187 | 2751.261411 | 3020.293228
8 | 6561 | 8528.910374 | 9488.531016
9 | 19683 | 26439.62216 | 29809.09933
10 | 59049 | 81962.8287 | 93648.04748
11 | 177147 | 254084.769 | 294204.018
12 | 531441 | 787662.7838 | 924269.1815
13 | 1594323 | 2441754.63 | 2903677.271
14 | 4782969 | 7569439.352 | 9122171.182
15 | 14348907 | 23465261.99 | 28658145.97
16 | 43046721 | 72742312.17 | 90032220.84

I did these in Excel, the last column using the actual pi() function (which obviously shows decimal truncation in the display). So, **in general**, the more precision you start with, the better your final answer (depending on a host of other things I forget from my numerical computation class), but you pay for it with computation time. But I'm sure I'm not telling you guys anything new. Just out of curiosity, was the Einstein app ever run in double precision and the results compared to single precision? I presume it was, based on "does not need", but I'd be interested to know the difference.
All my hot air aside, I could have sworn I remember reading somewhere about the accuracy of OpenCL results, with a statement to the effect of "it seems AMD has traded some precision for speed"; however, I thought that was rectified with newer Catalyst drivers. Maybe send a PM to Raistmer on the SETI@Home Beta boards. I'm positive he will know (I think he's the one who originally posted it). |
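For anyone who wants to reproduce the table above without Excel, a minimal sketch in Python: each approximation's initial error compounds through the repeated multiplication, so pi^16 computed from 3.1 already ends up roughly a fifth short of the double-precision reference.

```python
import math

reference = math.pi ** 16  # double-precision reference value, about 9.0e7

# Raise each approximation of pi to the 16th power and measure how far
# the compounded error has drifted from the reference.
for approx in (3.0, 3.1, 3.141592654):
    result = approx ** 16
    rel_err = abs(result - reference) / reference
    print(f"pi ~ {approx:<12} pi^16 = {result:.6e}  relative error {rel_err:.2%}")
```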
Bikeman (Heinz-Bernd Eggenstein) Volunteer moderator Project administrator Project developer Send message Joined: 28 Aug 06 Posts: 1483 Credit: 1,864,017 RAC: 0 |
Hi all! Thanks to your continued support, we were able to put the ATI/OpenCL app into production on Einstein@Home today: [url]http://einstein.phys.uwm.edu/forum_thread.php?id=9446[/url] The apps are the same as the ones used here, but note that the minimum BOINC client version was again increased to 7.0.27 (the most recent development version at the moment). We will continue to improve the app, so we will again need beta testers at Albert@Home in the near future, but for now you will probably want to scale back the work at Albert a bit and throw your ATI cards at the Einstein@Home production project. Thanks again, HB |
Christoph Send message Joined: 25 Aug 05 Posts: 48 Credit: 208,211 RAC: 0 |
Hi all! That sounds great. Will you cancel the unsent WUs? Then we could just keep our machines polling the servers, and they will get new work and apps as soon as you have them. Christoph |
zombie67 [MM] Send message Joined: 10 Oct 06 Posts: 130 Credit: 30,924,459 RAC: 0 |
My 7970 is producing nothing but validation errors: http://albert.phys.uwm.edu/results.php?hostid=2209&offset=0&show_names=0&state=4&appid= Any ideas why? Dublin, California Team: SETI.USA |