June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Adam D. Ruppe | "Adam D. Ruppe" <destructionator@gmail.com> wrote in message news:it97ee$ome$1@digitalmars.com...
> Nick Sabalausky wrote:
>> RDMD never needs to invoke DMD more than twice.
>
> rdmd also doesn't attempt to download libraries.

I know, but that's beside the point. You may need a few invocations of DMD or RDMD to *get* dependencies you don't already have (but *at most* only one per library, fewer if any part of the chain has more than one dependency), but *once you have them*, then *one* call to DMD will find *all* the files needed.

I'll use an example:

MyApp:
- main.d: Imports 'helper' and 'libA.all'
- helper.d: Imports 'libB.all'

LibA:
- libA/all.d: Imports 'libA.util'
- libA/util.d: Imports nothing of interest.

LibB:
- libB/all.d: Imports 'libB.baz'
- libB/baz.d: Imports 'libC'

LibC:
- libC.d: Imports nothing of interest.

Now, you *only* have the source for MyApp, none of the libs. You build it:

$ nicks-build-tool main.d -of:MyApp.exe
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all
Missing: libA.all, libB.all
Checking if deps can be downloaded...
libA.all: Exists in LibA
libB.all: Exists in LibB
Downloading LibA...done
Downloading LibB...done
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: libC
Checking if deps can be downloaded...
libC.d: Exists in LibC
Downloading LibC...done
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: {none}
Checking if need to rebuild...yes, MyApp.exe missing
Invoking dmd to compile everything...
Done.

Now you make changes to MyApp and want to build again:

$ nicks-build-tool main.d -of:MyApp.exe
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: {none}
Checking if need to rebuild...yes, main.d and helper.d changed.
Invoking dmd to compile everything...
Done.

DMD only needs to be invoked a small handful of times, and only when a library is missing. However, IMO, it would be far better to have dependency metadata for each lib/project rather than picking through the source and inferring packages.
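To make the loop in the transcript above concrete, here is a minimal D sketch of the "find deps, fetch what's missing, retry" cycle. It is purely illustrative: getDeps and fetchLibraryFor are made-up stubs standing in for parsing `dmd -deps` output and querying a download registry; no tool in this thread actually works this way.

import std.algorithm : filter;
import std.array : array;
import std.file : exists, mkdirRecurse, write;
import std.path : dirName;
import std.stdio : writeln;

// Stub: a real tool would run dmd once (e.g. with -deps) and parse the full graph.
string[] getDeps(string root)
{
    return ["helper.d", "libA/all.d", "libB/all.d"];
}

// Stub: a real tool would look the module up in a registry and fetch its library.
void fetchLibraryFor(string moduleFile)
{
    writeln("Downloading library providing ", moduleFile, "...done");
    mkdirRecurse(dirName(moduleFile));
    write(moduleFile, "// placeholder for fetched source\n");
}

void build(string rootModule)
{
    while (true)
    {
        auto missing = getDeps(rootModule).filter!(f => !f.exists).array;
        if (missing.length == 0)
            break;                  // everything is local: one dmd run now sees it all
        foreach (f; missing)
            fetchLibraryFor(f);     // at most one download per missing library
    }
    writeln("Invoking dmd to compile everything...");
}

void main()
{
    build("main.d");
}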
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Nick Sabalausky | Nick Sabalausky wrote:
> You import a module *from* a library.

I don't think libraries, aside from individual modules, should even exist in D, since you can and should put all interdependent stuff in a single file. If something is moved to a separate module, that means it has usefulness independent of the original module - if not, it wouldn't have been factored out to begin with. A module might import a whole web of modules, but each one it imports would still satisfy the definition of a module: something that's useful to more than one other module.

Suppose I write a database "library". It is composed of three modules: database.d, which is the common interface, and then mysql.d and sqlite.d, which implement that interface for their respective underlying engines. The only shared component is the database.d interface. Does that warrant making a library package for them? I don't think so.

Suppose a third party wants to implement access to Microsoft SQL via my same interface. If modules are the building blocks, that's easy for him. He just does "import adams.database;" and puts the file up. He doesn't have to petition for his module to be adopted by my database library while still being compatible with it. A user can then pull any one of the implementation modules - adams.mysql, adams.sqlite, or other_fellows.mssql - and it just works.

You could do the same thing with a library concept, but would you? Do you download a whole library just so you can implement a shared interface that is otherwise unrelated?

> Also, if a library needs any special "setup" step, then this won't even work anyway.

This is true, but I see it as a strike *against* packaged libraries, not for it.

Going back to the database interface: suppose I only offered database.d as part of an "Adam's Database Library" package, which, since it offers mysql, lists mysql as a dependency. Then Microsoft implements my interface. Someone who wants to use Microsoft's library is told it depends on mine... which depends on mysql. So it prompts them to install mysql in order to use mssql! That's awful.

To fix this, you might say "library mysql depends on library database"... but, taking that to its logical conclusion, library == module anyway.

BTW, you might be thinking: if you import my mysql module, how does it handle the C library it depends on? Answer: it doesn't. That's the library user's responsibility. If he's on CentOS, he can yum install mysql. If he's on Debian, he can apt-get install mysql. A D package manager shouldn't step on the toes of existing package managers. Lord knows they have enough problems of their own without dealing with our interop.

> I think a substantial number of people (*especially* windows users - it's unrealistic to expect windows users to use anything like junctions) would expect to be able to use an already-installed library without special setup for every single project that uses it.

The download program could automatically make locally available libs just work without hitting the network too.

> A central repo per se, isn't really a good idea.

Agreed. I just like having the option there for max convenience in getting started. The central repo might just provide a list of other repos to try.

> I think things like apt-get and 0install are very good models for us to follow

Blargh. I often think I'm the last person people should listen to when it comes to package management because the topic always brings three words to my mind: "shitload of fuck". I've never seen one that I actually like. I've seen only two that I don't hate with the burning passion of 1,000 suns, and both of them are pretty minimal (Slackware's old tgz system and my build.d. Note: they both suck, just not as much as the alternatives).

On the other hand, this is exactly why I jump in these threads. There's some small part of me that thinks maybe, just maybe, we can be the first to create a system that's not a steaming pile of putrid dogmeat.

Some specific things I hate about the ones I've used:

1) What if I want a version that isn't in the repos? Installing a piece of software myself almost *always* breaks something, since the package manager is too stupid to even realize there's a potential conflict and just does its own thing. This was one of the biggest problems with Ruby gems when I was forced to use it a few years back, and it comes up virtually every time I have to use yum.

This is why I really like it only downloading a module if it's missing. If I put the module in myself, it knows not to bother with it - the compile succeeds, so there's no need to invoke the downloader at all.

2) What if I want to keep an old version for one app, but have the new version for another? This is one reason why my program defaults to local subdirectories - so there'd be no risk of stepping on other apps at all.

3) Can I run it as non-root? CPAN seemed almost decent to me until I had to use it on a client's shared-host server. It failed miserably. (This was 2006; like with gems, maybe they've fixed it since then.)

If it insists on installing operating system files as a dependency of my module, it's evil.

4) Is it going to suddenly stop working if I leave it for a few months? It's extremely annoying to me when every command just complains about 404. (Just run yum update! If it's so easy, why doesn't the stupid thing do it itself?)

This is one reason why I really want an immutable repository. Append to it if you want, but don't invalidate my slices plz.

Another one of my big problems with Ruby gems was that it was extremely painful to install on other operating systems. At the time, installing it on FreeBSD and Solaris wasted way too much of my time.

A good package manager should be OS-agnostic in installation, use, and implementation. Its job is to fetch me some D stuff to use. Leave the operating system related stuff to me. I will not give it root under any circumstances - a compiler and build tool has no legitimate requirement for it.

(BTW, if it needs root because some user wanted a system-wide thing, that's ok. Just never *require* it.)
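A rough sketch of the module layout Adam describes, written out as three separate files in one listing. The module and type names are invented for illustration and do not correspond to an actual library:

// adams/database.d -- the only shared piece: a common interface module
module adams.database;

interface Database
{
    void connect(string connectionString);
    string[][] query(string sql);
}

// adams/mysql.d -- ships on its own; depends only on the interface module
module adams.mysql;

import adams.database;

class MySqlDatabase : Database
{
    void connect(string connectionString) { /* talk to the MySQL client lib here */ }
    string[][] query(string sql) { return null; }
}

// other_fellows/mssql.d -- a third party can implement the same interface
// without ever touching (or depending on) adams.mysql or adams.sqlite
module other_fellows.mssql;

import adams.database;

class MsSqlDatabase : Database
{
    void connect(string connectionString) { /* ... */ }
    string[][] query(string sql) { return null; }
}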
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Nick Sabalausky | On 06/14/2011 09:29 PM, Nick Sabalausky wrote:
> "Adam D. Ruppe"<destructionator@gmail.com> wrote in message
> news:it93ah$ekb$1@digitalmars.com...
>>> After the initial "gather everything and
>>> build" build, all it would ever have to do is exactly what RDMD
>>> already does right now: Run DMD once to find the deps, check them
>>> to see if anything needs rebuilt, and if so, run DMD the second
>>> time to build.
>>
>> Does rdmd handle cases where the dependencies have dependencies?
>>
>> Suppose app.d imports foo.d which imports bar.d
>>
>> dmd app.d
>> can't find module in foo.d
>>
>> retry:
>>
>> dmd app.d foo.d
>> can't find module bar.d
>>
>> try again:
>>
>> dmd app.d foo.d bar.d
>>
>> success.
>>
>>
>> Is it possible to cut out any one of those steps without caching
>> that third dmd line? Until you try to compile foo.d, it can't
>> know bar.d is required...
>
> RDMD never needs to invoke DMD more than twice. Once to find "all" the
> dependencies, and once to do the actual compile. When DMD is run to find a
> file's dependencies, it finds the *entire* dependency graph, not just one
> step of it.
It can do so because all files are present. The remote tool can't do that.
Andrei
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu | "Andrei Alexandrescu" <SeeWebsiteForEmail@erdani.org> wrote in message news:it9dt7$16fs$1@digitalmars.com...
> On 06/14/2011 09:29 PM, Nick Sabalausky wrote:
>> "Adam D. Ruppe"<destructionator@gmail.com> wrote in message news:it93ah$ekb$1@digitalmars.com...
>>>> After the initial "gather everything and
>>>> build" build, all it would ever have to do is exactly what RDMD
>>>> already does right now: Run DMD once to find the deps, check them
>>>> to see if anything needs rebuilt, and if so, run DMD the second
>>>> time to build.
>>>
>>> Does rdmd handle cases where the dependencies have dependencies?
>>>
>>> Suppose app.d imports foo.d which imports bar.d
>>>
>>> dmd app.d
>>> can't find module in foo.d
>>>
>>> retry:
>>>
>>> dmd app.d foo.d
>>> can't find module bar.d
>>>
>>> try again:
>>>
>>> dmd app.d foo.d bar.d
>>>
>>> success.
>>>
>>> Is it possible to cut out any one of those steps without caching that third dmd line? Until you try to compile foo.d, it can't know bar.d is required...
>>
>> RDMD never needs to invoke DMD more than twice. Once to find "all" the
>> dependencies, and once to do the actual compile. When DMD is run to find a
>> file's dependencies, it finds the *entire* dependency graph, not just one
>> step of it.
>
> It can do so because all files are present. The remote tool can't do that.

Right. All I'm saying is:

1. The remote tool *can* do that **after** the first build.

2. On the first build, the number of times DMD needs to be invoked is fairly limited. As far as finding deps goes (and not counting any special lib-specific setup steps), the upper bound is 1 + (number of libs needed).

Of course, that's if you do things one-library-at-a-time. If you try to do things one-file-at-a-time (which is of dubious benefit), *then* the number of DMD invocations would explode.
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Adam D. Ruppe | "Adam D. Ruppe" <destructionator@gmail.com> wrote in message news:it9ch1$140r$1@digitalmars.com...
> Nick Sabalausky wrote:
>> You import a module *from* a library.
>
> I don't think libraries, aside from individual modules, should even exist in D, since you can and should put all interdependent stuff in a single file.

Well, even if that's a valid point, the problem still remains that many people don't feel that way and many (most?) projects don't work that way. Should we just leave those people/projects out in the dark? Your approach, on the other hand, can be achieved by making each module a separate library.

> You could do the same thing with a library concept, but would you? Do you download a whole library just so you can implement a shared interface that is otherwise unrelated?

Libraries are small, and disk space/bandwidth is cheap. And note that that's being said by the #1 "old-hardware" guy around here.

>> Also, if a library needs any special "setup" step, then this won't even work anyway.
>
> This is true, but I see it as a strike *against* packaged libraries, not for it.

Even if it's inappropriate for most libraries, such as your example, I do think there are good uses for it. But regardless, operating on a per-lib basis instead of per-file doesn't *force* us to support such a feature if we decided we didn't want it.

>> I think a substantial number of people (*especially* windows
>> users - it's unrealistic to expect windows users to use anything
>> like junctions) would expect to be able to use an already-
>> installed library without special setup
>> for every single project that uses it.
>
> The download program could automatically make locally available libs just work without hitting the network too.

I'm just opposed to "duplicate every lib in every project that uses it" being the default.

>> I think things like apt-get and 0install are very good models for us to follow
>
> Blargh. I often think I'm the last person people should listen to when it comes to package management because the topic always brings three words to my mind: "shitload of fuck".
>
> I've never seen one that I actually like. I've seen only two that I don't hate with the burning passion of 1,000 suns, and both of them are pretty minimal (Slackware's old tgz system and my build.d. Note: they both suck, just not as much as the alternatives)
>
> On the other hand, this is exactly why I jump in these threads.
> There's some small part of me that thinks maybe, just maybe,
> we can be the first to create a system that's not a steaming pile
> of putrid dogmeat.
>
> Some specific things I hate about the ones I've used:
>
> 1) What if I want a version that isn't in the repos? Installing a piece of software myself almost *always* breaks something since the package manager is too stupid to even realize there's a potential conflict and just does its own thing.
>
> This was one of biggest problems with Ruby gems when I was forced to use it a few years back and it comes up virtually every time I have to use yum.
>
> This is why I really like it only downloading a module if it's missing. If I put the module in myself, it's knows to not bother with it - the compile succeeds, so there's no need to invoke the downloader at all.
>
> 2) What if I want to keep an old version for one app, but have
> the new version for another? This is one reason why my program
> default to local subdirectories - so there'd be no risk of stepping
> on other apps at all.
>
> 3) Can I run it as non-root? CPAN seemed almost decent
> to me until I had to use it on a client's shared host server. It
> failed miserably. (this was 2006, like with gems, maybe they
> fixed it since then.)
>
> If it insists on installing operating system files as a dependency to my module, it's evil.
>
> 4) Is it going to suddenly stop working if I leave it for a few months? It's extremely annoying to me when every command just complains about 404 (just run yum update! if it's so easy, why doesn't the stupid thing do it itself?).
>
> This is one reason why I really want an immutable repository. Append to it if you want, but don't invalidate my slices plz.
>
> Another one of my big problems with Ruby gems was that it was extremely painful to install on other operating systems. At the time, installing it on FreeBSD and Solaris wasted way too much of my time.
>
> A good package manager should be OS agnostic in installation, use, and implementation. It's job is to fetch me some D stuff to use. Leave the operating system related stuff to me. I will not give it root under any circumstances - a compiler and build tool has no legitimate requirement for it.
>
> (btw if it needs root because some user wanted a system wide thing, that's ok. Just never *require* it.)

These are all very good points that I think we should definitely keep in mind when designing this system.

Also, have you looked at 0install? I think it may match a lot of what you say you want here (granted, I've never actually used it). It doesn't require admin to install things, for instance. And it keeps different versions of the same lib instead of replacing version N with version N+1.

My point about apt-get and 0install being good models for us to follow was really referring more to: Ok, I want to install "XYZ", so I tell it to install XYZ and the damn thing *just works* - I don't have to fuck around with the dependencies myself, the machine does it. I don't have to give a shit what requires what, or what version. The damn thing just does it.

In fact, that was one of the main reasons I gave up on Linux the first time I tried it. Installing anything was idiotically convoluted. Despite any shortcomings they may have, things like apt-get at least make the situation tolerable.
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Adam D. Ruppe | Adam D. Ruppe wrote:
> Nick Sabalausky wrote:
>> I think things like apt-get and 0install are very good models for
>> us to follow
>
> Blargh. I often think I'm the last person people should listen
> to when it comes to package management because the topic always
> brings three words to my mind: "shitload of fuck".
>
> I've never seen one that I actually like. I've seen only two
> that I don't hate with the burning passion of 1,000 suns, and
> both of them are pretty minimal (Slackware's old tgz system and
> my build.d. Note: they both suck, just not as much as the
> alternatives)
>
> On the other hand, this is exactly why I jump in these threads.
> There's some small part of me that thinks maybe, just maybe,
> we can be the first to create a system that's not a steaming pile
> of putrid dogmeat.
>
>
> Some specific things I hate about the ones I've used:
[snip]
This seems to me to be very similar to the situation with search engines prior to Google. Remember AltaVista, where two out of every three search results were a broken link?
Seems to me that what's ultimately needed is a huge compatibility matrix, containing every version of every library and its compatibility with every version of every other library. Or something like that.
A package manager shouldn't silently use packages which have never been used with each other before.
It's a very difficult problem, I think, but at least package owners could manually supply a list of other packages they've tested with.
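As a sketch of the shape such a manually supplied "tested with" list could take, here is one purely hypothetical possibility in D. Neither the struct nor any tooling that would read it exists; the package names and versions are invented:

// Hypothetical only: a per-package declaration of what it was tested against.
struct PackageCompat
{
    string name;
    string release;
    // other packages (and the versions) this release is known to work with
    string[string] testedWith;
}

PackageCompat adamsDatabaseCompat()
{
    return PackageCompat(
        "adams.database",
        "1.2.0",
        ["adams.mysql" : "1.2.0", "other_fellows.mssql" : "0.9.1"]
    );
}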
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu | On 6/14/11 8:53 PM, Andrei Alexandrescu wrote:
> http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
>
> Destroy.
>
>
> Andrei
I think something like CPAN or RubyGems should be done in D, now :-)
I think it would give a big boost to D in many ways:
* Have a repository of libraries searchable by command line and retrievable by command line. Many library providers can be registered, like dsource or others.
* Then you can have a program that downloads all these libraries, one by one, and sees if they compile, link, etc., correctly. If not, you've broken some of their code. You can choose to break it and notify them, or just not to break it.
A problem I see in D now is that it's constantly changing (ok, the spec is frozen, but somehow old libraries stop working) and this will give a lot of stability to D.
But please, don't reinvent the wheel. Solutions for this already exist and work pretty well.
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Adam D. Ruppe | On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe <destructionator@gmail.com> wrote:
> BTW, I don't think it should be limited to just passing a
> url to the helper program.
>
> I'd do it something like this:
>
> dget module.name url_from_pragma

I still don't like the URL being stored in the source file -- where *specifically* on the network to get the file has nothing to do with compiling the code, and fixing a path problem shouldn't involve editing a source file -- there is too much risk.

For comparison, you don't have to give the compiler a full path telling it where to get modules; they are specified relative to the configured include paths. I think this model works well, and we should be able to re-use it for this purpose also.

You could even just use URLs as include paths:

-Ihttp://www.dsource.org/projects/dcollections/import

-Steve
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Andrei Alexandrescu | On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
> On 6/14/11 2:34 PM, Robert Clipsham wrote:
>> On 14/06/2011 20:07, Andrei Alexandrescu wrote:
>>> On 6/14/11 1:22 PM, Robert Clipsham wrote:
>>>> On 14/06/2011 14:53, Andrei Alexandrescu wrote:
>>>>> http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
>>>>>
>>>>> Destroy.
>>>>>
>>>>>
>>>>> Andrei
>>>>
>>>> This doesn't seem like the right solution to the problem - the correct
>>>> solution, in my opinion, is to have a build tool/package manager handle
>>>> this, not the compiler.
>>>>
>>>> Problems I see:
>>>> * Remote server gets hacked, everyone using the library now
>>>> executes malicious code
>>>
>>> This liability is not different from a traditional setup.
>>
>> Perhaps, but with a proper package management tool this can be avoided
>> with sha sums etc, this can't happen with a direct get. Admittedly this
>> line of defense falls if the intermediate server is hacked.
>
> You may want to update the proposal with the appropriate security artifacts.
>
> [snip]
>> I don't have a problem with automatically downloading source during a
>> first build, I do see a problem with getting the compiler to do it
>> though. I don't believe the compiler should have anything to do with
>> getting source code, unless the compiler also becomes a package manager
>> and build tool.
>
> Would you agree with the setup in which the compiler interacts during compilation with an external executable, placed in the same dir as the compiler, and with this spec?
>
> dget "url"

I'd rather have it be:

dget "includepath" module1 [module2 module3 ...]

Then use -I to specify include paths that are url forms. Then you specify the possible network include paths with:

-Ihttp://path/to/source

I think this goes well with the current dmd import model.

dget would be responsible for caching and updating the cache if the remote file changes.

-Steve
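A minimal sketch of the caching behaviour Steven describes, assuming a hypothetical dget helper. The function name, cache layout, and use of std.net.curl are all assumptions made for illustration; no such tool exists in the proposal as written:

import std.array : replace;
import std.file : exists, mkdirRecurse, write;
import std.net.curl : get;
import std.path : buildPath, dirName;

// Resolve module "foo.bar" against a URL include path, fetching it only if it
// is not already cached; the compiler can then be pointed at cacheDir with -I.
string fetchModule(string includeUrl, string moduleName, string cacheDir)
{
    auto relPath = moduleName.replace(".", "/") ~ ".d";  // foo.bar -> foo/bar.d
    auto local = buildPath(cacheDir, relPath);
    if (!exists(local))                                  // only hit the network when missing
    {
        auto source = get(includeUrl ~ "/" ~ relPath);
        mkdirRecurse(dirName(local));
        write(local, source);
    }
    return local;
}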
June 15, 2011 Re: DIP11: Automatic downloading of libraries
Posted in reply to Steven Schveighoffer | On Wed, 15 Jun 2011 08:57:04 -0400, Steven Schveighoffer wrote:
> On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>> Would you agree with the setup in which the compiler interacts during compilation with an external executable, placed in the same dir as the compiler, and with this spec?
>>
>> dget "url"
>
> I'd rather have it be dget "includepath" module1 [module2 module3 ...]
>
> Then use -I to specify include paths that are url forms. Then you specify the possible network include paths with:
>
> -Ihttp://path/to/source
>
> I think this goes well with the current dmd import model.
>
> dget would be responsible for caching and updating the cache if the remote file changes.
++vote;
-Lars