[VFS] HDFS failures on Windows

[VFS] HDFS failures on Windows

garydgregory
Ecki enabled the HDFS tests on Windows but they sure fail for me, see below.

Do they work for anyone else on Windows?

Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 3.681 sec <<< FAILURE! - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest  Time elapsed: 3.68 sec  <<< ERROR!
java.lang.ExceptionInInitializerError: null
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:468)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:418)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:146)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:162)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1565)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:421)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:284)
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest.<clinit>(HdfsFileProviderTest.java:95)

org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest  Time elapsed: 3.681 sec  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Gary


--
E-Mail: [hidden email] | [hidden email]
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: [VFS] HDFS failures on Windows

Bernd Eckenfels
Hello,

they do work for me, hm. Windows 7 x64, German locale. I will try some
other environments. Maybe it picks up some Cygwin stuff or something on
my system?

Regards,
Bernd


On Mon, 9 Jun 2014 18:05:18 -0400, Gary Gregory <[hidden email]> wrote:

> Ecki enabled the HDFS tests on Windows but they sure fail for me, see
> below.
>
> Do they work for anyone else on Windows?
>
> [...]


Re: [VFS] HDFS failures on Windows

garydgregory
Whoa... Cygwin? I have that installed, but it does not help.

How about this: can you please turn off HDFS testing on Windows, like it
was before?

I'll be happy to test patches for you on my setup: Windows 7 Professional
64-bit, Service Pack 1.

My understanding is that the HDFS jars we use do not run on Windows out of
the box because they rely on calling OS commands that are *nix-specific.
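
(For anyone who wants to reproduce that failure mode outside the test
suite: below is a minimal sketch — illustrative only, not VFS or Hadoop
code, with a made-up class name — of the kind of shell-out Hadoop 1.x
performs to read file permissions. Without a POSIX ls on the PATH, the
ProcessBuilder call throws the IOException that ultimately surfaces as the
ExceptionInInitializerError from loadPermissionInfo in the trace above.)

    import java.io.IOException;

    public class LsShellOutDemo {
        public static void main(String[] args) throws InterruptedException {
            try {
                // Roughly what RawLocalFileSystem.loadPermissionInfo() does:
                // spawn "ls -ld <path>" and parse the mode string it prints.
                Process p = new ProcessBuilder("ls", "-ld", ".").start();
                p.waitFor();
                System.out.println("ls exited with " + p.exitValue());
            } catch (IOException e) {
                // On Windows without an ls.exe on the PATH this is the failure:
                // Cannot run program "ls": CreateProcess error=2, ...
                System.err.println("Shell-out failed: " + e.getMessage());
            }
        }
    }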

Gary


On Tue, Jun 10, 2014 at 9:45 AM, Bernd Eckenfels <[hidden email]>
wrote:

> Hello,
>
> they do work for me, hm. Windows 7 x64, German locale. I will try some
> other environments. Maybe it picks up some Cygwin stuff or something on
> my system?
>
> [...]



Re: [VFS] HDFS failures on Windows

Bernd Eckenfels
Hello Gary,

I wanted to reproduce your problem, but had problems with the line numbers in the stack trace. Can you check why you have different ones? When I check it on my system, the line numbers match the 1.2.1 source. And if I actually disable stack-trace trimming in Surefire (committed), it actually prints a helpful error:

...
Caused by: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "ls": CreateProcess error=2, Das System kann die angegebene Datei nicht finden
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
        at org.apache.hadoop.util.Shell.run(Shell.java:182)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:712)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:448)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:423)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:146)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:162)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1704)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1626)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:421)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:284)
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest.<clinit>(HdfsFileProviderTest.java:95)
...

And it actually means that "ls.exe" is not on the PATH. So yes, the test does not work on all Windows systems; it requires at least an ls.exe. I will remove the automatic running of those tests on the Windows platform (again), but with a better-named profile.

As a quick fix it should be enough to put any ls.exe on the PATH; in my case it came from the portable Git distribution (from GitHub):
%LOCALAPPDATA%\GitHub\PortableGit_015aa71ef18c047ce8509ffb2f9e4bb0e3e73f13\bin\ls.exe
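
(An alternative to skipping via a Maven profile would be a guard inside the
test code itself — a hypothetical sketch, not what was committed, and the
class name is made up. It probes whether "ls" can be spawned and skips the
suite otherwise:)

    import java.io.IOException;

    import org.junit.Assume;
    import org.junit.BeforeClass;

    public class HdfsTestGuard {
        // Hypothetical guard: MiniDFSCluster shells out to POSIX tools such
        // as ls and chmod, so skip the whole suite when "ls" cannot spawn.
        @BeforeClass
        public static void requirePosixTools() {
            boolean lsAvailable;
            try {
                new ProcessBuilder("ls").start().destroy();
                lsAvailable = true;
            } catch (IOException e) {
                lsAvailable = false;
            }
            // With JUnit 4, a false assumption marks the tests as skipped.
            Assume.assumeTrue(lsAvailable);
        }
    }

(Note that HdfsFileProviderTest starts the MiniDFSCluster in a static
initializer — the <clinit> frame in the traces above — so a check like this
would have to run even earlier there.)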

Regards,
Bernd

On Tue, 10 Jun 2014 11:02:19 -0400, Gary Gregory <[hidden email]> wrote:

> Whoa... Cygwin? I have that installed, but it does not help.
>
> How about this: can you please turn off HDFS testing on Windows, like it
> was before?
>
> [...]



Re: [VFS] HDFS failures on Windows

garydgregory
The build still breaks on Windows. Can you fix it please?

Gary


On Mon, Jun 16, 2014 at 8:23 PM, Bernd Eckenfels <[hidden email]>
wrote:

> Hello Gary,
>
> I wanted to reproduce your problem, but had problems with the line numbers
> in the stack trace. Can you check why you have different ones? When I
> check it on my system, the line numbers match the 1.2.1 source. And if I
> actually disable stack-trace trimming in Surefire (committed), it actually
> prints a helpful error:
>
> [...]
>
> And it actually means that "ls.exe" is not on the PATH. So yes, the test
> does not work on all Windows systems; it requires at least an ls.exe. I
> will remove the automatic running of those tests on the Windows platform
> (again), but with a better-named profile.
>
> As a quick fix it should be enough to put any ls.exe on the PATH; in my
> case it came from the portable Git distribution (from GitHub):
>
> %LOCALAPPDATA%\GitHub\PortableGit_015aa71ef18c047ce8509ffb2f9e4bb0e3e73f13\bin\ls.exe
>
> [...]



Re: [VFS] HDFS failures on Windows

dlmarion
Gary, 

  Can you apply VFS-530 and see if that makes a difference?

Dave



-------- Original message --------
From: Gary Gregory <[hidden email]>
Date: 06/17/2014 8:26 PM (GMT-05:00)
To: Commons Developers List <[hidden email]>
Subject: Re: [VFS] HDFS failures on Windows

The build still breaks on Windows. Can you fix it please?

Gary

[...]

Re: [VFS] HDFS failures on Windows

Bernd Eckenfels
On Tue, 17 Jun 2014 20:26:11 -0400, Gary Gregory <[hidden email]> wrote:

> The build still breaks on Windows. Can you fix it please?

Sure, I am working on it; it is tracked under VFS-529.


> > I wanted to reproduce your problem, but had problems with the line
> > numbers in the stack trace. Can you check why you have different
> > ones? When I check it on my system the line numbers match the 1.2.1
> > source.

That was actually caused by a local modification on my side (I switched
to a newer Hadoop to see if it helps), so that's why the lines did not
match. But both versions have basically the same problem. In 2.x there
seems to be some better Windows support, but with specific setup
requirements, so it might be required to disable it there as well
(VFS-530).
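
(For context: Hadoop 2.x on Windows looks up its native helper winutils.exe
via the hadoop.home.dir system property, falling back to the HADOOP_HOME
environment variable, and expects it under <home>\bin. A hedged sketch of
the kind of setup a test run might need; the C:\hadoop path and class name
are purely illustrative:)

    public class HadoopWindowsSetup {
        public static void configure() {
            // Assumption for illustration: winutils.exe has been placed at
            // C:\hadoop\bin\winutils.exe. Hadoop 2.x's Shell class reads
            // hadoop.home.dir (or HADOOP_HOME) to locate it.
            if (System.getProperty("os.name").startsWith("Windows")) {
                System.setProperty("hadoop.home.dir", "C:\\hadoop");
            }
        }
    }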

Regards,
Bernd







Re: [VFS] HDFS failures on Windows

Bernd Eckenfels
On Tue, 17 Jun 2014 20:26:11 -0400, Gary Gregory <[hidden email]> wrote:

> The build still breaks on Windows. Can you fix it please?

Sure, it is tracked under VFS-529 and I am on it.


> > I wanted to reproduce your problem, but had problems with the line
> > numbers in the stack trace. Can you check why you have different
> > ones? When I check it on my system the line numbers match the 1.2.1
> > source.

That was actually caused by a local modification on my side (I switched
to a newer Hadoop to see if it helps), so that's why the lines did not
match. But both versions have basically the same problem. In 2.x there
seems to be some better Windows support, but with specific setup
requirements. So I will see whether they can be provided or avoided, or
whether it would need the auto-disable as well.

Regards,
Bernd





Re: [VFS] HDFS failures on Windows

garydgregory
On Tue, Jun 17, 2014 at 9:27 PM, Bernd Eckenfels <[hidden email]>
wrote:

> On Tue, 17 Jun 2014 20:26:11 -0400, Gary Gregory <[hidden email]> wrote:
>
> > The build still breaks on Windows. Can you fix it please?
>
> Sure, it is tracked under VFS-529 and I am on it.
>
>
> > > I wanted to reproduce your problem, but had problems with the line
> > > numbers in the stack trace. Can you check why you have different
> > > ones? When I check it on my system the line numbers match the 1.2.1
> > > source.
>
> That was actually caused by a local modification on my side (I switched
> to a newer Hadoop to see if it helps), so that's why the lines did not
> match. But both versions have basically the same problem. In 2.x there
> seems to be some better Windows support, but with specific setup
> requirements. So I will see whether they can be provided or avoided, or
> whether it would need the auto-disable as well.
>

I'll watch for the commits then.

Thank you,
Gary


> Regards,
> Bernd



Re: [VFS] HDFS failures on Windows

Bernd Eckenfels
Hello,

I have committed the (new) no-hdfs profile; can you check if you still
see problems, Gary?

Unfortunately, I think it is not possible to disable this profile on
the command line (for testing whether the needed binaries are installed
by hand).

Regards,
Bernd

On Tue, 17 Jun 2014 22:22:18 -0400, Gary Gregory <[hidden email]> wrote:

> On Tue, Jun 17, 2014 at 9:27 PM, Bernd Eckenfels <[hidden email]> wrote:
>
> > [...]
>
> I'll watch for the commits then.
>
> Thank you,
> Gary

