Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

dlmarion
Glad that you were able to make it work. When it failed for you, were you executing the clean lifecycle target for Maven? It should work in consecutive runs with mvn clean. I did not test consecutive runs without the clean target being executed.



-------- Original message --------
From: Bernd Eckenfels <[hidden email]>
Date: 01/10/2015 8:37 PM (GMT-05:00)
To: Commons Developers List <[hidden email]>
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Hello,

with this commit I added a cleanup of the data dir before the
MiniDFSCluster is started. I also use absolute file names to make
debugging a bit easier, and I moved the initialisation code to the
setUp() method:

http://svn.apache.org/r1650847 & http://svn.apache.org/r1650852
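
(For illustration, the cleanup amounts to something along these lines; this is only a sketch, the class name and the target/test/hdfs path are placeholders rather than the actual HdfsFileProviderTestCase code, and it assumes the Hadoop 2.x MiniDFSCluster builder API that the thread discusses moving to.)

    import java.io.File;
    import org.apache.commons.io.FileUtils;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Sketch only: wipe any stale data dir before formatting the mini cluster,
    // and use absolute paths so the directories are easy to find when debugging.
    public class MiniClusterSetUpSketch {
        private MiniDFSCluster cluster;

        protected void setUp() throws Exception {
            File dataDir = new File("target/test/hdfs").getAbsoluteFile(); // placeholder path
            FileUtils.deleteDirectory(dataDir); // remove name/data dirs left by a previous run
            Configuration conf = new Configuration();
            conf.set("hdfs.minidfs.basedir", dataDir.getAbsolutePath()); // Hadoop 2.x base-dir key
            cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).format(true).build();
            cluster.waitActive();
        }

        protected void tearDown() throws Exception {
            if (cluster != null) {
                cluster.shutdown();
            }
        }
    }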

With that cleanup in place, the tests do not error out anymore. But I
have no idea why this was happening on one machine and not on others
(maybe a race; the failing machine had an SSD?).

So now I can concentrate on merging the new version.

Regards
Bernd


On Sun, 11 Jan 2015 01:25:48 +0100, Bernd Eckenfels <[hidden email]> wrote:

> Hello,
>
> On Sat, 10 Jan 2015 03:12:19 +0000 (UTC), [hidden email] wrote:
>
> > Bernd,
> >
> > Regarding the Hadoop version for VFS 2.1, why not use the latest on
> > the first release of the HDFS provider? Hadoop 1.1.2 was released in
> > Feb 2013.
>
> Yes, you are right. We don't need to care about 2.0 as this is a new
> provider. I will make the changes, I just want to fix the current test
> failures I see first.
>
>
> > I just built 2.1-SNAPSHOT over the holidays with JDK 6, 7, and 8 on
> > Ubuntu. What type of test errors are you getting? Testing is
> > disabled on Windows unless you decide to pull in windows artifacts
> > attached to VFS-530. However, those artifacts are associated with
> > patch 3 and are for Hadoop 2.4.0. Updating to 2.4.0 would also be
> > sufficient in my opinion.
>
> Yes, what I mean is: I typically build under Windows so I would not
> notice if the test starts to fail. However it seems to pass on the
> integration build:
>
> https://continuum-ci.apache.org/continuum/projectView.action?projectId=129&projectGroupId=16
>
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.821 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> Starting DataNode 0 with dfs.data.dir: target/build/test2/data/dfs/data/data1,target/build/test2/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 76, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.853 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
>
> Anyway, on Ubuntu, I currently get this exception:
>
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< FAILURE! - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> junit.framework.TestSuite@56c77035(org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite)  Time elapsed: 1.479 sec  <<< ERROR!
> java.lang.RuntimeException: Error setting up mini cluster
>         at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:112)
>         at org.apache.commons.vfs2.test.AbstractTestSuite$1.protect(AbstractTestSuite.java:148)
>         at junit.framework.TestResult.runProtected(TestResult.java:142)
>         at org.apache.commons.vfs2.test.AbstractTestSuite.run(AbstractTestSuite.java:154)
>         at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>         at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.io.IOException: Cannot lock storage target/build/test/data/dfs/name1. The directory is already locked.
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1327)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1345)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1207)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:187)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:268)
>         at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:107)
>         ... 11 more
>
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.445 sec - in
>
> When I delete the core/target/build/test/data/dfs/ directory and then
> run the ProviderTest, I can repeat that multiple times and it works:
>
>   mvn surefire:test
> -Dtest=org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
>
> But when I run all tests, or the HdfsFileProviderTestCase, it fails,
> and afterwards not even the ProviderTest succeeds until I delete
> that dir.
>
> (I suspect the "locking" is a misleading error; it looks more like the
> data pool has some kind of instance ID which it does not have on the
> next run.)
>
> Looks like the TestCase has a problem and the ProviderTest does not do
> proper pre-cleaning. I will check the source. More generally, it
> should not use a fixed working directory anyway.
>
>
> > I started up Hadoop 2.6.0 on my laptop, created a directory and
> > file, then used the VFS shell to list and view the contents
> > (remember, the HDFS provider is currently read-only). Here is what
> > I did:
>
> Looks good. I will shorten it a bit and add it to the wiki. BTW, about
> the warning: is this something we can change?
>
> Regards
> Bernd



Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

dlmarion
Regarding the warning, it is something the user can change in their HDFS configuration files. It comes from the HDFS client object, not the VFS code.



-------- Original message --------
From: Bernd Eckenfels <[hidden email]>
Date: 01/10/2015 7:25 PM (GMT-05:00)
To: [hidden email]
Cc: Commons Developers List <[hidden email]>
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Hello,

On Sat, 10 Jan 2015 03:12:19 +0000 (UTC), [hidden email] wrote:

> Bernd,
>
> Regarding the Hadoop version for VFS 2.1, why not use the latest on
> the first release of the HDFS provider? Hadoop 1.1.2 was released in
> Feb 2013.

Yes, you are right. We don't need to care about 2.0 as this is a new
provider. I will make the changes, I just want to fix the current test
failures I see first.


> I just built 2.1-SNAPSHOT over the holidays with JDK 6, 7, and 8 on
> Ubuntu. What type of test errors are you getting? Testing is disabled
> on Windows unless you decide to pull in windows artifacts attached to
> VFS-530. However, those artifacts are associated with patch 3 and are
> for Hadoop 2.4.0. Updating to 2.4.0 would also be sufficient in my
> opinion.

Yes, what I mean is: I typically build under Windows so I would not
notice if the test starts to fail. However it seems to pass on the
integration build:

https://continuum-ci.apache.org/continuum/projectView.action?projectId=129&projectGroupId=16

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.821 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
Starting DataNode 0 with dfs.data.dir: target/build/test2/data/dfs/data/data1,target/build/test2/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 76, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.853 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase

Anyway, on Ubuntu, I currently get this exception:

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< FAILURE! - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
junit.framework.TestSuite@56c77035(org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite)  Time elapsed: 1.479 sec  <<< ERROR!
java.lang.RuntimeException: Error setting up mini cluster
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:112)
        at org.apache.commons.vfs2.test.AbstractTestSuite$1.protect(AbstractTestSuite.java:148)
        at junit.framework.TestResult.runProtected(TestResult.java:142)
        at org.apache.commons.vfs2.test.AbstractTestSuite.run(AbstractTestSuite.java:154)
        at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.io.IOException: Cannot lock storage target/build/test/data/dfs/name1. The directory is already locked.
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1327)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1207)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:187)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:268)
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:107)
        ... 11 more

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.445 sec - in

When I delete the core/target/build/test/data/dfs/ directory and then run the ProviderTest, I can repeat that multiple times and it works:

  mvn surefire:test -Dtest=org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest

But when I run all tests, or the HdfsFileProviderTestCase, it fails, and afterwards not even the ProviderTest succeeds until I delete that dir.

(I suspect the "locking" is a misleading error; it looks more like the data pool has some kind of instance ID which it does not have on the next run.)

Looks like the TestCase has a problem and the ProviderTest does not do proper pre-cleaning. I will check the source. More generally, it should not use a fixed working directory anyway.
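
(A sketch of what a non-fixed working directory could look like; it assumes the Hadoop 2.x MiniDFSCluster, which accepts an explicit base directory via the "hdfs.minidfs.basedir" setting, and the class and prefix names are only illustrative.)

    import java.io.File;
    import java.nio.file.Files;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Sketch: give every run its own scratch directory so a leftover
    // name/data dir from an earlier run can never collide with the new one.
    public class PerRunClusterDir {
        static MiniDFSCluster startCluster() throws Exception {
            File base = Files.createTempDirectory("vfs-hdfs-test-").toFile();
            Configuration conf = new Configuration();
            conf.set("hdfs.minidfs.basedir", base.getAbsolutePath()); // Hadoop 2.x option
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
            cluster.waitActive();
            return cluster;
        }
    }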


> I started up Hadoop 2.6.0 on my laptop, created a directory and file,
> then used the VFS shell to list and view the contents (remember, the
> HDFS provider is currently read-only). Here is what I did:

Looks good. I will shorten it a bit and add it to the wiki. BTW, about the warning: is this something we can change?
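
(For the wiki entry, the shell session translates to roughly the following API usage. This is only a sketch: the hdfs://localhost:9000 URL and the /tmp paths are placeholders for whatever the cluster actually serves, and it assumes the 2.1-SNAPSHOT hdfs provider plus the Hadoop client jars are on the classpath.)

    import java.io.InputStream;
    import org.apache.commons.vfs2.FileObject;
    import org.apache.commons.vfs2.FileSystemManager;
    import org.apache.commons.vfs2.VFS;

    // Sketch: list a directory and read a file through the read-only hdfs provider.
    public class HdfsVfsReadSketch {
        public static void main(String[] args) throws Exception {
            FileSystemManager manager = VFS.getManager();

            FileObject dir = manager.resolveFile("hdfs://localhost:9000/tmp");
            for (FileObject child : dir.getChildren()) {
                System.out.println(child.getName().getBaseName() + " " + child.getType());
            }

            FileObject file = manager.resolveFile("hdfs://localhost:9000/tmp/test.txt");
            try (InputStream in = file.getContent().getInputStream()) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    System.out.write(buf, 0, n);
                }
            }
            file.close();
            dir.close();
        }
    }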

Regards
Bernd

AW: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Bernd Eckenfels
In reply to this post by dlmarion
Yes, it failed with clean as well.

I am currently letting the site build run in a loop and it seems to be stable.

Regards
Bernd

--
http://bernd.eckenfels.net

----- Original Message -----
From: "dlmarion" <[hidden email]>
Sent: 11.01.2015 02:57
To: "Commons Developers List" <[hidden email]>
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Glad that you were able to make it work. When it failed for you, were you executing the clean lifecycle target for Maven? It should work in consecutive runs with mvn clean. I did not test consecutive runs without the clean target being executed.





Re: AW: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

dlmarion
Updated to the latest commit, built with 'mvn clean install' and 'mvn clean install site'. Both succeeded; is there anything else you need me to try?

----- Original Message -----

From: "Bernd Eckenfels" <[hidden email]>
To: "Commons Developers List" <[hidden email]>
Sent: Saturday, January 10, 2015 9:00:37 PM
Subject: AW: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Yes, it failed with clean as well.

I am currently letting the site build run in a loop and it seems to be stable.

Regards
Bernd

--
http://bernd.eckenfels.net 
