Thread overview
grain - D Language for Deep Learning
Apr 24, 2019
Fynn Schröder
Apr 24, 2019
jmh530
Apr 24, 2019
jmh530
Apr 24, 2019
Shigeki Karita
Apr 24, 2019
jmh530
Apr 26, 2019
Shigeki Karita
Apr 26, 2019
jmh530
April 23, 2019
Google Alerts just found these slides:

https://speakerdeck.com/shigekikarita/grain-d-language-for-deep-learning

Does anyone have more information about this?
April 24, 2019
On Wednesday, 24 April 2019 at 00:02:42 UTC, Andrei Alexandrescu wrote:
> Google Alerts just found these slides:
>
> https://speakerdeck.com/shigekikarita/grain-d-language-for-deep-learning
>
> Does anyone have more information about this?

It's an autograd library for dynamic neural networks based on mir and CUDA. See GitHub for more details: https://github.com/ShigekiKarita/grain
I've tried it and it works great -- although it's far from feature-complete in comparison to e.g. PyTorch.
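
For anyone unfamiliar with the terminology: "autograd for dynamic neural networks" means the computation graph is recorded while ordinary code runs (define-by-run, as in Chainer/PyTorch), and gradients come from replaying that record backwards. Here is a minimal sketch of the mechanism in plain D -- the Var/mul/add/backward names are mine for illustration, not grain's actual API:

import std.stdio;

// Define-by-run reverse-mode AD on a tape: every operation that executes
// appends a closure that knows how to push gradients back to its inputs.
class Var
{
    double data;
    double grad = 0;
    this(double d) { data = d; }
}

void delegate()[] tape; // recorded in execution order

Var mul(Var a, Var b)
{
    auto c = new Var(a.data * b.data);
    tape ~= { a.grad += b.data * c.grad; b.grad += a.data * c.grad; };
    return c;
}

Var add(Var a, Var b)
{
    auto c = new Var(a.data + b.data);
    tape ~= { a.grad += c.grad; b.grad += c.grad; };
    return c;
}

void backward(Var output)
{
    output.grad = 1;
    foreach_reverse (f; tape) f(); // unwind the tape
}

void main()
{
    auto x = new Var(2.0);
    auto y = new Var(3.0);
    auto z = add(mul(x, y), x); // z = x*y + x, graph recorded as it runs

    backward(z);
    writefln("dz/dx = %s, dz/dy = %s", x.grad, y.grad); // 4 and 2
}

A static-graph framework would require declaring the whole graph up front; define-by-run lets you use plain loops and branches, which is what makes "dynamic" networks convenient.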

April 24, 2019
On Wednesday, 24 April 2019 at 06:13:13 UTC, Fynn Schröder wrote:
> [snip]
>
> It's an autograd library for dynamic neural networks based on mir and CUDA. See GitHub for more details: https://github.com/ShigekiKarita/grain
> I've tried it and it works great -- although it's far from feature-complete in comparison to e.g. PyTorch.

Cool. Thanks for the summary.
April 24, 2019
On Wednesday, 24 April 2019 at 10:51:08 UTC, jmh530 wrote:
> On Wednesday, 24 April 2019 at 06:13:13 UTC, Fynn Schröder wrote:
>> [snip]
>>
>> It's an autograd library for dynamic neural networks based on mir and CUDA. See GitHub for more details: https://github.com/ShigekiKarita/grain
>> I've tried it and it works great -- although it's far from feature-complete in comparison to e.g. PyTorch.
>
> Cool. Thanks for the summary.

Hmm, it looks like there are comparisons between it and Chainer, PyTorch, and TensorFlow. It might be interesting to compare it to some other static autograd libraries. The only one I can think of off the top of my head is Stan's [1], though that's designed more for probabilistic programming than neural networks.

[1] https://github.com/stan-dev/math


April 24, 2019
On Wednesday, 24 April 2019 at 10:56:54 UTC, jmh530 wrote:
> On Wednesday, 24 April 2019 at 10:51:08 UTC, jmh530 wrote:
>> On Wednesday, 24 April 2019 at 06:13:13 UTC, Fynn Schröder wrote:
>>> [snip]
>>>
>>> It's an autograd library for dynamic neural networks based on mir and CUDA. See GitHub for more details: https://github.com/ShigekiKarita/grain
>>> I've tried it and it works great -- although it's far from feature-complete in comparison to e.g. PyTorch.
>>
>> Cool. Thanks for the summary.
>
> Hmm, it looks like there are comparisons between it and Chainer, PyTorch, and TensorFlow. It might be interesting to compare it to some other static autograd libraries. The only one I can think of off the top of my head is Stan's [1], though that's designed more for probabilistic programming than neural networks.
>
> [1] https://github.com/stan-dev/math

I see. I'm interested in Stan; it is the best library for probabilistic models, but it lacks GPU computation. Therefore, I plan to add a probabilistic programming paradigm to grain, like PyTorch (Pyro) and TensorFlow (TF Probability).
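
The piece that lets Pyro and TF Probability sit on top of an autograd engine is the reparameterization trick: a random draw is rewritten as a deterministic, differentiable function of its parameters plus parameter-free noise, so gradients can flow through sampling. A rough self-contained sketch of that mechanism with manual derivatives (plain D; not a grain API, just an illustration):

import std.math : log, sqrt, cos, PI;
import std.random : Random, uniform01, unpredictableSeed;
import std.stdio : writefln;

void main()
{
    auto rng = Random(unpredictableSeed);

    // Box-Muller: one standard normal sample eps ~ N(0, 1) from two uniforms.
    double eps = sqrt(-2 * log(1 - uniform01(rng))) * cos(2 * PI * uniform01(rng));

    // Reparameterization: z ~ N(mu, sigma^2) written as z = mu + sigma * eps.
    // All the randomness lives in eps, so z is differentiable w.r.t. mu and sigma.
    double mu = 0.5, sigma = 2.0;
    double z = mu + sigma * eps;

    // Toy loss L = z^2; chain rule through the reparameterized sample.
    double dLdz = 2 * z;
    double dLdmu = dLdz * 1.0;    // dz/dmu = 1
    double dLdsigma = dLdz * eps; // dz/dsigma = eps

    writefln("z = %s, dL/dmu = %s, dL/dsigma = %s", z, dLdmu, dLdsigma);
}

With an autograd engine, the two manual chain-rule lines disappear: mu and sigma become tracked variables and backward() produces the same gradients, which is what variational inference in Pyro/TFP relies on.
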
April 24, 2019
On Wednesday, 24 April 2019 at 16:33:00 UTC, Shigeki Karita wrote:
> [snip]
>
> I see. I'm interested in Stan; it is the best library for probabilistic models, but it lacks GPU computation. Therefore, I plan to add a probabilistic programming paradigm to grain, like PyTorch (Pyro) and TensorFlow (TF Probability).

Conveniently enough, they just incorporated some GPU support in the March release [1]. Here's an earlier status update [2]. The initial work focused on Cholesky decomposition because that was a big source of slowdown for some types of models. It probably still has a ways to go before reaching TensorFlow's maturity on the GPU.

[1] https://github.com/stan-dev/math/releases/tag/v2.19.0
[2] https://discourse.mc-stan.org/t/gpu-update-whats-up-and-where-we-are-going/6015
April 26, 2019
On Wednesday, 24 April 2019 at 17:31:03 UTC, jmh530 wrote:
> On Wednesday, 24 April 2019 at 16:33:00 UTC, Shigeki Karita wrote:
>> [snip]
>>
>> I see. I'm interested in Stan; it is the best library for probabilistic models, but it lacks GPU computation. Therefore, I plan to add a probabilistic programming paradigm to grain, like PyTorch (Pyro) and TensorFlow (TF Probability).
>
> Conveniently enough, they just incorporated some GPU support in the March release [1]. Here's an earlier status update [2]. The initial work focused on Cholesky decomposition because that was a big source of slowdown for some types of models. It probably still has a ways to go before reaching TensorFlow's maturity on the GPU.
>
> [1] https://github.com/stan-dev/math/releases/tag/v2.19.0
> [2] https://discourse.mc-stan.org/t/gpu-update-whats-up-and-where-we-are-going/6015

I didn't know about that GPU support in Stan. That's cool! Cholesky decomposition always gives me trouble when I work with covariance matrices and the like. If you are interested in GPU acceleration in probabilistic programming, see also this paper (Table 2) on Edward (the previous name of TensorFlow Probability): https://arxiv.org/pdf/1701.03757.pdf
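
For readers wondering why Cholesky keeps coming up: for a covariance matrix Sigma, the factor L with Sigma = L*L^T gives cheap sampling (x = mu + L*z, z ~ N(0, I)) and a cheap log-determinant (log|Sigma| = 2 * sum_i log L[i][i]), which is most of the work in a multivariate normal log-density. A naive self-contained D sketch (textbook O(n^3) loop, no mir/LAPACK; illustration only):

import std.math : log, sqrt;
import std.stdio : writeln;

// Cholesky factor L (lower triangular) of a symmetric positive-definite
// matrix a, so that a = L * L^T. Naive textbook loop.
double[][] cholesky(const double[][] a)
{
    auto n = a.length;
    auto l = new double[][](n, n);
    foreach (ref row; l) row[] = 0;

    foreach (i; 0 .. n)
        foreach (j; 0 .. i + 1)
        {
            double s = a[i][j];
            foreach (k; 0 .. j)
                s -= l[i][k] * l[j][k];
            l[i][j] = (i == j) ? sqrt(s) : s / l[j][j];
        }
    return l;
}

void main()
{
    // A small covariance matrix (symmetric positive definite).
    double[][] sigma = [[4.0, 2.0], [2.0, 3.0]];
    auto l = cholesky(sigma);
    writeln("L = ", l); // [[2, 0], [1, 1.41...]]

    // log|Sigma| from the factor: 2 * sum of log of the diagonal.
    double logDet = 0;
    foreach (i; 0 .. l.length)
        logDet += 2 * log(l[i][i]);
    writeln("log|Sigma| = ", logDet); // log(4*3 - 2*2) = log 8
}

This O(n^3) factorization is the step Stan's initial GPU work targeted, per the release notes above, since it dominates the cost of models with large dense covariance matrices.
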
April 26, 2019
On Friday, 26 April 2019 at 06:35:42 UTC, Shigeki Karita wrote:
> [snip]
>
> I didn't know about that GPU support in Stan. That's cool! Cholesky decomposition always gives me trouble when I work with covariance matrices and the like. If you are interested in GPU acceleration in probabilistic programming, see also this paper (Table 2) on Edward (the previous name of TensorFlow Probability): https://arxiv.org/pdf/1701.03757.pdf

I think I recall hearing something about Edward. In my experience, Bayesian modelling can be quite finicky: you might do something to get faster results, but then the results may not make sense, particularly as the model becomes more complicated. While I often prefer the Bayesian approach, faster doesn't necessarily mean better.