Summary:   Task affinity using srun with Intel MPI
Product:   Slurm
Component: Configuration
Reporter:  Karsten Kutzer <kkutzer>
Assignee:  Felip Moll <felip.moll>
Status:    RESOLVED INFOGIVEN
Severity:  4 - Minor Issue
CC:        felip.moll
Version:   19.05.x
Hardware:  Linux
OS:        Linux
Site:      Lenovo
Description
Karsten Kutzer 2019-02-05 03:25:19 MST
Felip Moll:

Hi Karsten,

First of all, I changed the severity of this bug to sev-4. Sev-2 is for really urgent and critical matters. At most I would say this is a sev-3, but the machine is not yet in production. You can read more about our policies here: https://www.schedmd.com/support.php

Regarding the bug: I need you to provide the latest slurm.conf and cgroup.conf. I would also need to see the output of the following command:

srun --mpi=list

Have you exported the I_MPI_PMI_LIBRARY variable with the correct path? Note you can also use pmi2, pmix, and so on. Here's an example of how to run an MPI job:

module load intel/<whatever version>   # includes Intel binaries, libraries, the I_MPI_PMI_LIBRARY setting, and so on

srun approach:
------------------
#!/bin/bash
#SBATCH --job-name=Hello_MPI
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=14

srun ./hello_mpi

Alternatively, use the mpirun approach:
--------------------------------------------
#!/bin/bash
#SBATCH --job-name=Hello_MPI
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=14

unset I_MPI_PMI_LIBRARY
# avoid binding all tasks to a single CPU core
export SLURM_CPU_BIND=none
mpirun -np $SLURM_NTASKS ./hello_mpi

Please provide me with the requested info and let's see what's missing.

Karsten Kutzer:

Hi Felip, thanks for responding so quickly.
Sorry for the wrong severity, I will use 4 in the future. I will try to run MPI as you suggested, but want to get the other information you asked for out now. The environment variable was set like this:

export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

i01r01c01s01:~ # file /usr/lib64/libpmi.so
/usr/lib64/libpmi.so: symbolic link to libpmi.so.0.0.0
i01r01c01s01:~ # file /usr/lib64/libpmi.so.0.0.0
/usr/lib64/libpmi.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=61569b106d772725dea52c9cbe8bb1e39b2d5942, not stripped

Here is the output you asked for:

sl01:/etc/slurm # cat cgroup.conf
###
#
# Slurm cgroup support configuration file
#
# See man slurm.conf and man cgroup.conf for further
# information on cgroup configuration parameters
#--
CgroupAutomount=yes
ConstrainCores=yes
TaskAffinity=yes
ConstrainSwapSpace=yes  # ???
AllowedSwapSpace=0      # ???
ConstrainRAMSpace=yes
MaxRAMPercent=90

sl01:/etc/slurm # cat slurm.conf
# Script:        /etc/slurm/slurm.conf
#
# Maintainer:    pmayes@lenovo.com
# Modified:      2018-06-19
# Last modified: 2018-07-19
# Last modified: 2018-12-10
#
# Description:
#   Main Slurm configuration file for SuperMUC-NG
#
# Basics
#
ClusterName=supermucng
ControlMachine=sl01
ControlAddr=sl01opa
#BackupController=sl02
#BackupAddr=sl02opa
SlurmUser=slurm
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
#
# State and Control
#
StateSaveLocation=/var/spool/slurm/ctld
SlurmdSpoolDir=/var/spool/slurm
SwitchType=switch/none
MpiDefault=pmi2
SlurmctldPidFile=/var/run/slurm/slurmctld.pid
SlurmdPidFile=/var/run/slurm/slurmd.pid
ProctrackType=proctrack/cgroup
# PrivateData=accounts,jobs,nodes,reservations,usage,users  # ???????????????????????
SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --cpu_bind=no --mpi=none $SHELL"
PropagateResourceLimits=ALL  # ???????????????????????
#PropagateResourceLimitsExcept=CPU,NPROC,NOFILE,AS  # FMoll
#PropagateResourceLimitsExcept=CPU,NPROC,AS  # FMoll
JobSubmitPlugins=lua
#
# Prologs and Epilogs
#
Prolog=/etc/slurm/scripts/Prolog
Epilog=/etc/slurm/scripts/Epilog
SrunProlog=/etc/slurm/scripts/SrunProlog
SrunEpilog=/etc/slurm/scripts/SrunEpilog
TaskProlog=/etc/slurm/scripts/TaskProlog
TaskEpilog=/etc/slurm/scripts/TaskEpilog
PrologFlags=Alloc,Contain  # FMoll
#
# Node Health Check
#
HealthCheckProgram=/usr/sbin/nhc
HealthCheckInterval=600
HealthCheckNodeState=IDLE
#TaskPlugin=task/cgroup,task/affinity  # ???????????????????????
#TaskPlugin=task/affinity
#
# Timers
#
SlurmctldTimeout=420     # CHK, was 300
SlurmdTimeout=420        # CHK, was 300
ResumeTimeout=420        # CHK, wasn't set before
BatchStartTimeout=20     # FMoll
CompleteWait=15          # FMoll
PrologEpilogTimeout=120  # FMoll
#MessageTimeout=30       # FMoll
MessageTimeout=60        # CHK, wasn't set before
MinJobAge=600            # FMoll
InactiveLimit=0
#KillWait=30
KillWait=2
Waittime=0
TCPTimeout=15  # CHK
KillOnBadExit=1
UnkillableStepTimeout=120  # Added by Peter to see if it improves the "Kill task failed" draining issue
UnkillableStepProgram=/etc/slurm/UnkillableStepProgram.sh
#
# Scheduling
#
MaxJobCount=15000
SchedulerType=sched/backfill
FastSchedule=1
#SchedulerAuth=
SelectType=select/cons_res  # FMoll
#SelectType=select/linear   # ???????????????????????
SelectTypeParameters=CR_CPU_Memory  # FMoll
PriorityType=priority/multifactor
PriorityWeightAge=1000000
PriorityWeightJobSize=500000
PriorityWeightPartition=500000
PriorityMaxAge=14-0
SchedulerParameters=bf_window=10080,default_queue_depth=10000,bf_interval=30,bf_resolution=1800,bf_max_job_test=3000,bf_max_job_user=800,bf_continue  # FMoll
#
#
# Launching
#
LaunchParameters=send_gids
#
# Logging
#
# DebugFlags=NO_CONF_HASH stops the constant warnings about slurm.conf not being the
# same everywhere. This is because we include /etc/slurm.specific.conf, which is
# different on every node
# DebugFlags=NO_CONF_HASH  # ,Energy
SlurmctldDebug=debug
#SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=debug
#SlurmdDebug=info
#SlurmdLogFile=/var/log/slurm/slurmd.%n.log
#include /etc/slurm/slurm.specific.conf
include /etc/slurm.specific.conf
JobCompType=jobcomp/filetxt
JobCompLoc=/var/log/slurm/job_completion.txt
#
# Accounting
#
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=localhost
# AccountingStorageEnforce=associations,qos,limits  # ???????????????????????
AcctGatherEnergyType=acct_gather_energy/xcc
AcctGatherNodeFreq=30
# EnforcePartLimits=ALL  # ???????????????????????
#
# Topology
#
TopologyPlugin=topology/tree
TreeWidth=22    # FMoll
#TreeWidth=7000 # CHK
#
# Compute Nodes
#
PartitionName=DEFAULT Default=NO OverSubscribe=EXCLUSIVE State=UP
PartitionName=bm Nodes=f01r[01-02]c[01-06]s[01-12],i[01-08]r[01-11]c[01-06]s[01-12]
#PartitionName=bm Nodes=i[01-08]r[01-11]c[01-06]s[01-12]
PartitionName=test Nodes=i01r[01-11]c[01-06]s[01-12] AllowQOS=test MinNodes=1 MaxNodes=16 MaxTime=00:30:00 Default=YES
PartitionName=fat Nodes=f01r[01-02]c[01-06]s[01-12] AllowQOS=fat MinNodes=1 MaxNodes=128 MaxTime=48:00:00 PriorityJobFactor=0
PartitionName=micro Nodes=i[01-02]r[01-11]c[01-06]s[01-12] AllowQOS=micro MinNodes=1 MaxNodes=16 MaxTime=48:00:00 PriorityJobFactor=0
PartitionName=general Nodes=i[02-06]r[01-11]c[01-06]s[01-12] AllowQOS=general MinNodes=17 MaxNodes=792 MaxTime=48:00:00 PriorityJobFactor=70
PartitionName=large Nodes=i[03-08]r[01-11]c[01-06]s[01-12] AllowQOS=large MinNodes=793 MaxNodes=3168 MaxTime=12:00:00 PriorityJobFactor=100
#PartitionName=tmp1 Nodes=f01r[01-02]c[01-06]s[01-12],i[01-08]r[01-11]c[01-06]s[01-12] AllowQOS=tmp1 AllowGroups=vip
#PartitionName=tmp2 Nodes=f01r[01-02]c[01-06]s[01-12],i[01-08]r[01-11]c[01-06]s[01-12] AllowQOS=tmp2 AllowGroups=vip
#PartitionName=tmp3 Nodes=f01r[01-02]c[01-06]s[01-12],i[01-08]r[01-11]c[01-06]s[01-12] AllowQOS=tmp3 AllowGroups=vip
#NodeName=DEFAULT CPUs=96 Sockets=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=96322 Features=hot,thin,PROJECT1,PROJECT2,SCRATCH
NodeName=DEFAULT CPUs=96 Sockets=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=88258
#
# Thin Node Island 1
#
NodeName=DEFAULT Features=i01,hot,thin,work,scratch,dss
NodeName=i01r01c[01-06]s[01-12] NodeAddr=172.16.192.[1-72]
NodeName=i01r02c[01-06]s[01-12] NodeAddr=172.16.192.[81-152]
NodeName=i01r03c[01-06]s[01-12] NodeAddr=172.16.192.[161-232]
NodeName=i01r04c[01-06]s[01-12] NodeAddr=172.16.193.[1-72]
NodeName=i01r05c[01-06]s[01-12] NodeAddr=172.16.193.[81-152]
NodeName=i01r06c[01-06]s[01-12] NodeAddr=172.16.193.[161-232]
NodeName=i01r07c[01-06]s[01-12] NodeAddr=172.16.194.[1-72]
NodeName=i01r08c[01-06]s[01-12] NodeAddr=172.16.194.[81-152]
NodeName=i01r09c[01-06]s[01-12] NodeAddr=172.16.194.[161-232]
NodeName=i01r10c[01-06]s[01-12] NodeAddr=172.16.195.[1-72]
NodeName=i01r11c[01-06]s[01-12] NodeAddr=172.16.195.[81-152]
#
# Thin Node Island 2
#
NodeName=DEFAULT Features=i02,hot,thin,work,scratch,dss
NodeName=i02r01c[01-06]s[01-12] NodeAddr=172.16.196.[1-72]
NodeName=i02r02c[01-06]s[01-12] NodeAddr=172.16.196.[81-152]
NodeName=i02r03c[01-06]s[01-12] NodeAddr=172.16.196.[161-232]
NodeName=i02r04c[01-06]s[01-12] NodeAddr=172.16.197.[1-72]
NodeName=i02r05c[01-06]s[01-12] NodeAddr=172.16.197.[81-152]
NodeName=i02r06c[01-06]s[01-12] NodeAddr=172.16.197.[161-232]
NodeName=i02r07c[01-06]s[01-12] NodeAddr=172.16.198.[1-72]
NodeName=i02r08c[01-06]s[01-12] NodeAddr=172.16.198.[81-152]
NodeName=i02r09c[01-06]s[01-12] NodeAddr=172.16.198.[161-232]
NodeName=i02r10c[01-06]s[01-12] NodeAddr=172.16.199.[1-72]
NodeName=i02r11c[01-06]s[01-12] NodeAddr=172.16.199.[81-152]
#
# Thin Node Island 3
#
NodeName=DEFAULT Features=i03,hot,thin,work,scratch,dss
NodeName=i03r01c[01-06]s[01-12] NodeAddr=172.16.200.[1-72]
NodeName=i03r02c[01-06]s[01-12] NodeAddr=172.16.200.[81-152]
NodeName=i03r03c[01-06]s[01-12] NodeAddr=172.16.200.[161-232]
NodeName=i03r04c[01-06]s[01-12] NodeAddr=172.16.201.[1-72]
NodeName=i03r05c[01-06]s[01-12] NodeAddr=172.16.201.[81-152]
NodeName=i03r06c[01-06]s[01-12] NodeAddr=172.16.201.[161-232]
NodeName=i03r07c[01-06]s[01-12] NodeAddr=172.16.202.[1-72]
NodeName=i03r08c[01-06]s[01-12] NodeAddr=172.16.202.[81-152]
NodeName=i03r09c[01-06]s[01-12] NodeAddr=172.16.202.[161-232]
NodeName=i03r10c[01-06]s[01-12] NodeAddr=172.16.203.[1-72]
NodeName=i03r11c[01-06]s[01-12] NodeAddr=172.16.203.[81-152]
#
# Thin Node Island 4
#
NodeName=DEFAULT Features=i04,hot,thin,work,scratch,dss
NodeName=i04r01c[01-06]s[01-12] NodeAddr=172.16.204.[1-72]
NodeName=i04r02c[01-06]s[01-12] NodeAddr=172.16.204.[81-152]
NodeName=i04r03c[01-06]s[01-12] NodeAddr=172.16.204.[161-232]
NodeName=i04r04c[01-06]s[01-12] NodeAddr=172.16.205.[1-72]
NodeName=i04r05c[01-06]s[01-12] NodeAddr=172.16.205.[81-152]
NodeName=i04r06c[01-06]s[01-12] NodeAddr=172.16.205.[161-232]
NodeName=i04r07c[01-06]s[01-12] NodeAddr=172.16.206.[1-72]
NodeName=i04r08c[01-06]s[01-12] NodeAddr=172.16.206.[81-152]
NodeName=i04r09c[01-06]s[01-12] NodeAddr=172.16.206.[161-232]
NodeName=i04r10c[01-06]s[01-12] NodeAddr=172.16.207.[1-72]
NodeName=i04r11c[01-06]s[01-12] NodeAddr=172.16.207.[81-152]
#
# Thin Node Island 5
#
NodeName=DEFAULT Features=i05,cold,thin,work,scratch,dss
NodeName=i05r01c[01-06]s[01-12] NodeAddr=172.16.208.[1-72]
NodeName=i05r02c[01-06]s[01-12] NodeAddr=172.16.208.[81-152]
NodeName=i05r03c[01-06]s[01-12] NodeAddr=172.16.208.[161-232]
NodeName=i05r04c[01-06]s[01-12] NodeAddr=172.16.209.[1-72]
NodeName=i05r05c[01-06]s[01-12] NodeAddr=172.16.209.[81-152]
NodeName=i05r06c[01-06]s[01-12] NodeAddr=172.16.209.[161-232]
NodeName=i05r07c[01-06]s[01-12] NodeAddr=172.16.210.[1-72]
NodeName=i05r08c[01-06]s[01-12] NodeAddr=172.16.210.[81-152]
NodeName=i05r09c[01-06]s[01-12] NodeAddr=172.16.210.[161-232]
NodeName=i05r10c[01-06]s[01-12] NodeAddr=172.16.211.[1-72]
NodeName=i05r11c[01-06]s[01-12] NodeAddr=172.16.211.[81-152]
#
# Thin Node Island 6
#
NodeName=DEFAULT Features=i06,cold,thin,work,scratch,dss
NodeName=i06r01c[01-06]s[01-12] NodeAddr=172.16.212.[1-72]
NodeName=i06r02c[01-06]s[01-12] NodeAddr=172.16.212.[81-152]
NodeName=i06r03c[01-06]s[01-12] NodeAddr=172.16.212.[161-232]
NodeName=i06r04c[01-06]s[01-12] NodeAddr=172.16.213.[1-72]
NodeName=i06r05c[01-06]s[01-12] NodeAddr=172.16.213.[81-152]
NodeName=i06r06c[01-06]s[01-12] NodeAddr=172.16.213.[161-232]
NodeName=i06r07c[01-06]s[01-12] NodeAddr=172.16.214.[1-72]
NodeName=i06r08c[01-06]s[01-12] NodeAddr=172.16.214.[81-152]
NodeName=i06r09c[01-06]s[01-12] NodeAddr=172.16.214.[161-232]
NodeName=i06r10c[01-06]s[01-12] NodeAddr=172.16.215.[1-72]
NodeName=i06r11c[01-06]s[01-12] NodeAddr=172.16.215.[81-152]
#
# Thin Node Island 7
#
NodeName=DEFAULT Features=i07,cold,thin,work,scratch,dss
NodeName=i07r01c[01-06]s[01-12] NodeAddr=172.16.216.[1-72]
NodeName=i07r02c[01-06]s[01-12] NodeAddr=172.16.216.[81-152]
NodeName=i07r03c[01-06]s[01-12] NodeAddr=172.16.216.[161-232]
NodeName=i07r04c[01-06]s[01-12] NodeAddr=172.16.217.[1-72]
NodeName=i07r05c[01-06]s[01-12] NodeAddr=172.16.217.[81-152]
NodeName=i07r06c[01-06]s[01-12] NodeAddr=172.16.217.[161-232]
NodeName=i07r07c[01-06]s[01-12] NodeAddr=172.16.218.[1-72]
NodeName=i07r08c[01-06]s[01-12] NodeAddr=172.16.218.[81-152]
NodeName=i07r09c[01-06]s[01-12] NodeAddr=172.16.218.[161-232]
NodeName=i07r10c[01-06]s[01-12] NodeAddr=172.16.219.[1-72]
NodeName=i07r11c[01-06]s[01-12] NodeAddr=172.16.219.[81-152]
#
# Thin Node Island 8
#
NodeName=DEFAULT Features=i08,cold,thin,work,scratch,dss
NodeName=i08r01c[01-06]s[01-12] NodeAddr=172.16.220.[1-72]
NodeName=i08r02c[01-06]s[01-12] NodeAddr=172.16.220.[81-152]
NodeName=i08r03c[01-06]s[01-12] NodeAddr=172.16.220.[161-232]
NodeName=i08r04c[01-06]s[01-12] NodeAddr=172.16.221.[1-72]
NodeName=i08r05c[01-06]s[01-12] NodeAddr=172.16.221.[81-152]
NodeName=i08r06c[01-06]s[01-12] NodeAddr=172.16.221.[161-232]
NodeName=i08r07c[01-06]s[01-12] NodeAddr=172.16.222.[1-72]
NodeName=i08r08c[01-06]s[01-12] NodeAddr=172.16.222.[81-152]
NodeName=i08r09c[01-06]s[01-12] NodeAddr=172.16.222.[161-232]
NodeName=i08r10c[01-06]s[01-12] NodeAddr=172.16.223.[1-72]
NodeName=i08r11c[01-06]s[01-12] NodeAddr=172.16.223.[81-152]
#
# Fat Node Island
#
NodeName=DEFAULT RealMemory=773697
NodeName=DEFAULT Features=f01,fat,work,scratch,dss
NodeName=f01r01c[01-06]s[01-12] NodeAddr=172.16.224.[1-72]
NodeName=f01r02c[01-06]s[01-12] NodeAddr=172.16.224.[81-152]

sl01:/etc/slurm # srun --mpi=list
srun: MPI types are...
srun: pmi2
srun: none
srun: openmpi

Best regards,
Karsten Kutzer

Felip Moll:

Karsten:
Please ensure this is the Slurm library and not coming from another package (just to be sure):
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
An 'ldd /usr/lib64/libpmi.so' will show whether it is linked to Slurm.
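A minimal sketch of that check (the `libslurm` match and the fallback message are illustrative, not part of the original advice):

```shell
# Check whether the PMI library the MPI tasks will load is Slurm's own build.
lib=/usr/lib64/libpmi.so
if ldd "$lib" 2>/dev/null | grep -q libslurm; then
    msg="OK: $lib is linked against libslurm"
else
    msg="check $lib manually: ldd output shows no libslurm (or file is missing)"
fi
echo "$msg"
```

On RPM-based systems, `rpm -qf /usr/lib64/libpmi.so.0.0.0` is another way to see which package owns the file.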
>sl01:/etc/slurm # srun --mpi=list
>srun: MPI types are...
>srun: pmi2
>srun: none
>srun: openmpi
You can also try the --mpi=pmi2 switch when running a job, but I see this is already your default
in slurm.conf, so there is really no need to.
Then, I see you have not set the task plugin correctly in slurm.conf. Please set:
TaskPlugin=task/cgroup,task/affinity
and in cgroup.conf comment out this line or set it to No:
# TaskAffinity=yes
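For reference, the resulting fragments would look like this (a sketch; only these lines change, the remaining settings stay as they are):

```conf
# slurm.conf
TaskPlugin=task/cgroup,task/affinity

# cgroup.conf
ConstrainCores=yes
#TaskAffinity=yes
```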
Restart slurmctld and retry.
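One way to verify the binding afterwards (a sketch; the srun invocation in the comment assumes a working allocation and is illustrative):

```shell
# Each task can print its own CPU affinity mask; with working task affinity
# the ranks report distinct core lists instead of all sharing core 0.
grep Cpus_allowed_list /proc/self/status
# Under Slurm, run it once per task, e.g.:
#   srun -N2 --ntasks-per-node=4 bash -c 'grep Cpus_allowed_list /proc/self/status'
```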
What's in /etc/slurm.specific.conf ?
I also suggest disabling AcctGatherNodeFreq=30; it will just create noise and is not needed. The real
XCC frequency plugin for jobs is set in acct_gather.conf.
Check also this:
TreeWidth=22 # FMoll
This number should be set to the square root of the number of nodes in the cluster for systems with no more than
2500 nodes, or the cube root for larger systems.
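As a sketch of that rule (the node count here is an assumption derived from the island definitions above; adjust it to the real total):

```shell
# Suggested TreeWidth: ceil(sqrt(n)) for n <= 2500 nodes, ceil(cbrt(n)) above.
nodes=6480   # illustrative: 8 thin islands x 11 racks x 72 nodes + 144 fat nodes
if [ "$nodes" -le 2500 ]; then
    tw=$(awk -v n="$nodes" 'BEGIN { t = int(sqrt(n)); if (t*t < n) t++; print t }')
else
    tw=$(awk -v n="$nodes" 'BEGIN { t = int(n^(1/3)); while (t*t*t < n) t++; print t }')
fi
echo "suggested TreeWidth for $nodes nodes: $tw"
```

For this illustrative count the cube-root rule yields 19, in the same ballpark as the configured TreeWidth=22.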
Make these changes and tell me how it goes.
Felip Moll:

I want to add a little bit more information here. To run with Intel MPI and pmi2, you should set:

I_MPI_PMI_LIBRARY=<path_to_slurm>/libpmi2.so
srun --mpi=pmi2 ...

You can even unset I_MPI_PMI_LIBRARY and Intel MPI will fall back to its default (which I think is some internal pmi2 implementation, not sure):

unset I_MPI_PMI_LIBRARY
srun --mpi=pmi2 ...

Then, for the affinity, please follow my comment 3. So, please set in slurm.conf:

TaskPlugin=task/cgroup,task/affinity

and in cgroup.conf comment out this line or set it to No:

# TaskAffinity=yes

and keep this value:

ConstrainCores=yes

Restart slurmctld/slurmd and retry.

Felip Moll:

Hi Karsten, did you finally apply the suggested changes? Is everything working properly now? If everything is fine, can I close the bug? Thanks.

Felip Moll:

I am closing this issue for now. If you make any progress and see that it is not working, just reopen it.

Regards