June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

On Wed, 15 Jun 2011 10:38:28 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
>> I can't really think of any other issues.
>
> Allow me to repeat: the scheme as you mention it is unable to figure out and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest.
>
> The dip takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files. Separating that information from source files has two liabilities. First, it breaks the whole transitivity thing. Second, it adds yet another itsy-bitsy pellet of metadata/config/whatevs files that need to be minded. I just don't see the advantage of imposing that.

Yes, these are good points. But I think Dmitry brought up good points too (how do you specify that TreeMap.d needs to be compiled too?).

One possible solution is a central repository of code. So basically, you can depend on other projects as long as they are sanely namespaced and live under one include path. I think dsource should provide something like this. For example:

http://www.dsource.org/import

Then if you wanted dcollections.TreeMap, the import would be:

http://www.dsource.org/import/dcollections/TreeMap.d

Of course, that still doesn't solve Dmitry's problem. We need to think of a way to do that too. Still thinking....

-Steve
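A tiny sketch of the name-to-URL mapping such a central repository would imply; the dsource.org/import layout is only the proposal above, not an existing service, and importToUrl is an invented helper:

    import std.array : replace;

    // map a dotted module name onto the proposed central include path
    string importToUrl(string moduleName,
                       string base = "http://www.dsource.org/import")
    {
        return base ~ "/" ~ moduleName.replace(".", "/") ~ ".d";
    }

    // importToUrl("dcollections.TreeMap") yields
    // http://www.dsource.org/import/dcollections/TreeMap.d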
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

>> dget would just add the appropriate path:
>>
>> import dcollections.TreeMap =>
>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
>> hm.. doesn't work
>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
>> ok, there it is!
>
> This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.

Look at the URL again; I'll split out the include path and the import:

[http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di]

There is nothing being assumed by dget. It could try to import dcollections.TreeMap from some other remote path as well, and fail. It follows the same rules as the current import scheme, just with URLs instead of paths.

-Steve
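A sketch of the probing described above, with dget's internals imagined. Only the same-rules-as--I idea and the .d/.di fallback come from the post; fetchImport is an invented name, and std.net.curl is today's Phobos module, which postdates this thread:

    import std.array : replace;
    import std.net.curl : get, CurlException;

    // try each network include path, and each extension, the way dmd tries -I dirs
    string fetchImport(string moduleName, string[] includeUrls)
    {
        auto rel = moduleName.replace(".", "/");
        foreach (base; includeUrls)
        {
            foreach (ext; [".d", ".di"])
            {
                try
                {
                    return get(base ~ "/" ~ rel ~ ext).idup;  // found it
                }
                catch (CurlException)
                {
                    // 404 and friends: move on to the next candidate
                }
            }
        }
        throw new Exception("module " ~ moduleName ~ " not found on any include path");
    }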
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Robert Clipsham

On 6/15/11 9:56 AM, Robert Clipsham wrote:
> On 15/06/2011 15:33, Andrei Alexandrescu wrote:
>> On 6/15/11 9:13 AM, Steven Schveighoffer wrote:
>>> We have been getting along swimmingly without pragmas for adding local include paths. Why do we need to add them using pragmas for network include paths?
>>
>> That doesn't mean the situation is beyond improvement. If I had my way I'd add pragma(liburl) AND pragma(libpath).
>
> pragma(lib) doesn't (and can't) work as it is, why do you want to add more useless pragmas?

Then we should yank it or change it. That pragma was defined in a completely different context from today's, and right now we have a much larger user base to draw experience and insight from.

> Command line arguments are the correct way to go here.

Why? At this point enough time has been collectively spent on this that I'm genuinely curious to find a reason that would have me go "huh, haven't thought about it that way. Fine, no need for the dip."

> Not to mention that paths won't be standardized across machines most likely so the latter would be useless.

version() for the win.

>>> Also, I don't see the major difference in someone who's making a piece of software from adding the include path to their source file vs. adding it to their build script.
>>
>> Because in the former case the whole need for a build script may be obviated. That's where I'm trying to be.
>
> This can't happen in a lot of cases, e.g. if you're interfacing with a scripting language, you need certain files automatically generated during the build etc.

Sure. For those cases, use tools. For everything else, there's liburl.

> Admittedly, for the most part, you'll just want to be able to build libraries given a directory or an executable given a file with _Dmain() in it.

That's the spirit. This is what the proposal aims at: you have the root file and the process takes care of everything - no configs, no metadata, no XML info, no command-line switches, no fuss, no muss. With such a feature, "hello world" equivalents demoing dcollections, qt, mysql (some day), etc. will be simple few-liners that anyone can download and compile flag-free. I find it difficult to understand how only a few find that appealing.

> There'll still be a lot of cases where you want to specify some things to be dynamic libs, others static libs, and what if any of it you want in the resulting binary.

Sure. But don't you think it's okay to have the DIP leave such cases to other tools without impeding them in any way?

>> Sounds good. I actually had the same notion, just forgot to mention it in the dip (fixed).
>
> I'd agree with Steven that we need command line arguments for it, I completely disagree about pragmas though given that they don't work (as mentioned above). Just because I know you're going to ask:
>
> # a.d has a pragma(lib) in it
> $ dmd -c a.d
> $ dmd -c b.d
> $ dmd a.o b.o
> <Linker errors>
>
> This is unavoidable unless you put metadata in the object files, and even then you leave clutter in the resulting binary, unless you specify that the linker should remove it (I don't know if it can).

I now understand, thanks. So I take it a compile-and-link command would succeed, whereas a compile-separately succession of commands wouldn't? That wouldn't mean the pragma doesn't work, just that it only works under certain build scenarios.

>> This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
>
> I personally think there should be a central repository listing packages and their URLs etc, which massively simplifies what needs passing on a command line. Eg -RmyPackage would cause myPackage to be looked up on the central server, which will have the relevant URL etc.
>
> Of course, there should be some sort of override method for private remote servers.

That is tantamount to planting a flag in the distributed dmd.conf. Sounds fine.

>>> As I said in another post, you could also specify a zip file or tarball as a base path, and the whole package is downloaded instead. We may need some sort of manifest instead in order to verify the import will be found instead of downloading the entire package to find out.
>>
>> Sounds cool.
>
> I don't believe this tool should exist without compression being default.

Hm. Well fine.

Andrei
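For concreteness, the flag-free "hello world" described above might read as below. pragma(liburl) is hypothetical - it exists only in this discussion, not in any compiler - and the TreeMap usage is illustrative rather than checked against dcollections' actual API:

    // app.d - everything the build needs is stated in the source (hypothetical feature)
    pragma(liburl, "http://www.dsource.org/projects/dcollections/import");

    import dcollections.TreeMap;   // would be fetched on demand from the URL above

    void main()
    {
        auto map = new TreeMap!(string, int);   // usage illustrative
        map["answer"] = 42;
    }

The point is that "dmd app.d" alone would then suffice: no -I flags, no config file, no build script.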
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Steven Schveighoffer

On 6/15/11 10:07 AM, Steven Schveighoffer wrote:
> On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>
>>> dget would just add the appropriate path:
>>>
>>> import dcollections.TreeMap =>
>>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
>>> hm.. doesn't work
>>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
>>> ok, there it is!
>>
>> This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
>
> Look at the URL again; I'll split out the include path and the import:
>
> [http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di]

I understood the first time. Yes, so it imposes on the URL structure that it ends with /dcollections/.

> There is nothing being assumed by dget. It could try to import dcollections.TreeMap from some other remote path as well, and fail. It follows the same rules as the current import scheme, just with URLs instead of paths.

I don't think it's a good idea to search several paths for a given import. One import should map to two download attempts: one for the .di, then one for the .d.

Andrei
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

On 6/14/11 6:53 AM, Andrei Alexandrescu wrote:
> http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
>
> Destroy.
I keep thinking that if we build a separate dget, dmd could call it even if there weren't a URL embedded in the source. If dget had a list of central repositories then it could simply look in them for the package/module and compilation would magically work with or without a pragma.
In any case I suspect that a more formal versioning system is needed. One way of supporting versions would be to make dget aware of source control systems like svn, mercurial and git which support tags.
The pragma could support source control URLs, and could also include an optional version. dget could be aware of common source control clients, and could try calling them if installed, looking for the code tagged with the provided version. If no version were specified then head/master would be used.
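A hypothetical rendering of that idea; the pragma form, the repository URL, and the version argument are all invented for illustration:

    // hypothetical: a source-control URL plus an optional release tag
    pragma(liburl, "git://example.org/dcollections.git", "v2.0c");

    import dcollections.TreeMap;

With git installed, dget could in effect run "git clone --branch v2.0c" into its cache; with the version argument omitted, it would check out head/master as described above.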
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

On Wed, 15 Jun 2011 11:23:58 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/15/11 10:07 AM, Steven Schveighoffer wrote:
>> On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>>
>>>> dget would just add the appropriate path:
>>>>
>>>> import dcollections.TreeMap =>
>>>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
>>>> hm.. doesn't work
>>>> get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
>>>> ok, there it is!
>>>
>>> This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
>>
>> Look at the URL again; I'll split out the include path and the import:
>>
>> [http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di]
>
> I understood the first time. Yes, so it imposes on the URL structure that it ends with /dcollections/.

No, that was just because dcollections' home base is www.dsource.org/projects/dcollections. I.e. the fact that the import starts with dcollections and the include path contains dcollections is not significant. dget tries all paths just like dmd tries all paths.

>> There is nothing being assumed by dget. It could try to import dcollections.TreeMap from some other remote path as well, and fail. It follows the same rules as the current import scheme, just with URLs instead of paths.
>
> I don't think it's a good idea to search several paths for a given import. One import should map to two download attempts: one for the .di, then one for the .d.

In the ideas I've outlined, dget is given include paths, not a list of which packages are in those include paths. So the two attempts are per import path. However, this only happens on first use; after that the files are cached, so no try-and-fail is required.

I can see why you want this, but in order for it to fit in with the current import path scheme, it would have to be this way. Otherwise, you'd need a different switch besides -I to implement it (or the pragma). Not that those cannot be implemented, but the simplistic "just specify a network include path like you would a local one" has appeal to me.

-Steve
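A rough sketch of that cache-on-first-use behavior. The cache layout and the fetchCached helper are invented, and std.net.curl is today's Phobos module, not something available when this was written:

    import std.file : exists, mkdirRecurse, write;
    import std.net.curl : get;
    import std.path : buildPath, dirName;

    // hypothetical cache layout: <cacheRoot>/<module path>
    string fetchCached(string baseUrl, string relPath, string cacheRoot)
    {
        auto local = buildPath(cacheRoot, relPath);
        if (!exists(local))   // only the first use hits the network
        {
            auto source = get(baseUrl ~ "/" ~ relPath);
            mkdirRecurse(dirName(local));
            write(local, source);
        }
        return local;   // hand this path to dmd like any local file
    }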
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Steven Schveighoffer

On 6/15/11 10:39 AM, Steven Schveighoffer wrote:
> I can see why you want this, but in order for it to fit in with the
> current import path scheme, it would have to be this way. Otherwise,
> you'd need a different switch besides -I to implement it (or the
> pragma). Not that those cannot be implemented, but the simplistic "just
> specify a network include path like you would a local one" has appeal to
> me.
I understand the appeal, but I also understand the inherent limitations of HTTP versus a file system. I don't think it's wise to pound the former into the shape of the latter.
Andrei
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Steven Schveighoffer

It occurs to me that you ought to treat network things identically to local modules in every way...

dmd app.d ../library.d

just works. I propose that:

dmd app.d http://whatever.com/library.d

should just work too - dmd would need only to recognize that the module name starts with xxxx:// and pass it to the dget program to translate.
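A minimal sketch of that dispatch as it might look in a compiler driver. resolveSourceArg is invented, and dget's command-line interface (print a cached local path) is assumed:

    import std.algorithm.searching : canFind;
    import std.process : execute;
    import std.string : strip;

    // if an argument looks like a URL, let dget fetch it and return a local path
    string resolveSourceArg(string arg)
    {
        if (arg.canFind("://"))
        {
            auto r = execute(["dget", arg]);  // assumed: dget prints the local path
            return r.output.strip;
        }
        return arg;  // a plain local file passes through untouched
    }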
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

Andrei Alexandrescu wrote:
>> pragma(lib) doesn't (and can't) work as it is, why do you want to add more useless pragmas?
>
> Then we should yank it or change it.

Please no! pragma(lib) rocks. Just because it doesn't work in *all* cases doesn't mean we should get rid of it. The presence of a pragma(lib) doesn't break separate compilation, since you can still do it on the command line. Granted, it doesn't help there, but it doesn't hurt either. It does help for the simple case where you want "dmd myapp.d" to just work.

>> I don't believe this tool should exist without compression being default.
>
> Hm. Well fine.

Note that compression for single files is built into the HTTP protocol. If you gzip a file ahead of time and serve it up with Content-Encoding: gzip in the headers, it is supposed to be transparently un-gzipped by the user agent. All the browsers do it, but I'm not sure if libcurl does (it probably does; it's pretty complete). Regardless, even if not, it's trivial to implement ourselves.

Compressing an entire package depends on agreeing on what a package is, but just serving up .zips of common modules works... importing modules from a library if you must :) dget is free to download the zip and unzip it to its cache transparently before passing the path to dmd.
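A sketch of what the client side might look like, using today's std.net.curl (which postdates this thread). Whether libcurl decompresses automatically depends on how it was built, so this version decompresses explicitly with std.zlib; fetchModule is an invented name:

    import std.net.curl : HTTP, get;
    import std.zlib : uncompress;

    // fetch a module's source, asking the server for gzip
    char[] fetchModule(string url)
    {
        auto conn = HTTP();
        conn.addRequestHeader("Accept-Encoding", "gzip");
        auto raw = get!(HTTP, ubyte)(url, conn);        // possibly gzipped bytes
        // header keys are lower-cased by std.net.curl
        if (conn.responseHeaders.get("content-encoding", "") == "gzip")
            return cast(char[]) uncompress(raw, 0, 47); // 47 = auto-detect gzip/zlib
        return cast(char[]) raw;
    }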
June 15, 2011
Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu

> I understand the appeal, but I also understand the inherent limitations of HTTP versus a file system. I don't think it's wise to pound the former into the shape of the latter.
AIUI, you can't search for a module on a local filesystem either, aside from guessing a name. In D, a module name doesn't necessarily have to match the file name.
If you import foo.bar;, you have two options:

1. call it foo/bar.d, or
2. pass whatever.d, which has "module foo.bar;" in it, on the command line explicitly.
Being able to list directory entries wouldn't really help locate a given named D module anyway.
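To make the second option concrete (the file and symbol names are arbitrary):

    // whatever.d - the file name need not match the module name
    module foo.bar;

    int answer() { return 42; }

    // app.d
    import foo.bar;

    void main() { assert(answer() == 42); }

Here "dmd app.d whatever.d" builds fine, while "dmd app.d" alone cannot guess that foo.bar lives in whatever.d.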