2016-09-09: General discussion
Time: 14:00 UTC
Hangout link: https://hangouts.google.com/call/v5olhwzpfzgzpoq5i3wthjpqpie
Attendees
- Jonathan Helmus, Filipe, Michael, Ray, Eric Dill, Björn Grüning, Matt Craig (late)
Standing items
- How many repos? ~1100
- How many contributors? ~220
- New core devs?
Notes
Bioconda updates:
- Rebuilding binaries for conda-build 2.0 is a problem when the source tarballs disappear; Bioconda is archiving the sources.
- Automating the process to archive the source tarball and test in a container (would be nice as a service, to create a bundled container for running packages).
Core Devs
- Eric Dill (invite)
- Peter M. Landwehr (already invited)
Split builds
- conda-build issue (xref?): conda/conda-build#1338
- Continuum compiler toolchain: use gcc (Linux) and clang plus gfortran (OS X) consistently.
Pre-releases/RC
- Needs a champion to write a proposal!
* Eric Dill will take this on. Hopefully a CFEP will land within one week, by 2016-09-16.
- Eric suggests having both a general dev label and per-package -dev labels. The former is for "cutting edge people", while the latter is for people who only want to test/use the new version of one thing (plus any dependencies).
- dev is a bad name; these packages are more for testing than for development. Testing? RC?
- Filipe thinks we should not accept versions earlier than RC. (Not really "do not accept", but encourage people to call their dev versions RCs. The thinking is that conda-forge is a place to release binaries, and nightly testing builds, for example, are beyond the scope IMO.)
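If the label idea above is adopted, the mechanics on anaconda.org might look roughly like the sketch below. The label name rc, the package name, and the filename are placeholders rather than anything agreed here; the flags are standard anaconda-client/conda usage.

```
# Upload a release candidate to a dedicated label instead of the default "main"
# label (the "rc" label name, package name, and filename are placeholders).
anaconda upload --user conda-forge --label rc mypackage-1.0rc1-py35_0.tar.bz2

# Testers opt in explicitly via the label sub-channel:
conda install -c conda-forge/label/rc mypackage

# Regular users installing from the main label never see the RC:
conda install -c conda-forge mypackage
```

That keeps pre-release builds away from anyone who has not explicitly opted in, which seems to be the intent behind the label suggestion above.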
The feather-feedstock maintainers' question:
- They want to build Python 2.7 with a modern Visual Studio; conda-forge should suggest to them that this would create a different ecosystem from the one that is compatible with conda-forge.
conda-build 2.0 and conda-build-all. Mike asked if we are ready to use conda-build 2.0. conda-forge needs to check:
- where are the pins to conda-build <2.0
- check conda-inspect
- check the upload script
- check conda-smithy
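For the "where are the pins" check above, one quick (if crude) way to find them in a locally cloned tooling repo (conda-smithy, the upload scripts, etc.); the file globs are just a guess at where pins might live:

```
# Run from the root of a checked-out tooling repo; prints file and line number
# for every mention of conda-build so the <2.0 pins can be reviewed by hand.
git grep -n "conda-build" -- '*.yml' '*.yaml' '*.py' '*.sh' '*.txt'
```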
Use pip in the build script.
- On Windows need conda > 4.2
- Need to check whether or not entry_points must be declared in the recipe.
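A minimal sketch of what the pip-based build script could look like, assuming the usual build.sh/bld.bat recipe layout; the flags shown are a common pattern rather than anything decided here.

```
#!/bin/bash
# build.sh - install with pip instead of "python setup.py install".
# --no-deps: conda already provides the dependencies, so pip must not fetch any.
# --ignore-installed: don't let pip skip the install because it finds a copy elsewhere.
pip install . --no-deps --ignore-installed
```

On Windows the same command goes in bld.bat (usually followed by `if errorlevel 1 exit 1`); whether entry_points can then be dropped from meta.yaml is exactly the open question above.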
Agenda
- Next meeting: can we do 2016-09-16?
- Update from the bioconda community: tarball archiving and automatic container (Docker, rkt) builds.
* Archives: bioconda/bioconda-recipes#2194
* Container: bioconda/bioconda-recipes#2297
* Is conda-forge interested in a similar integration?
- OSX - getting back to a usable, coherent stack
* libc++ (clang) vs libstdc++ (gcc/g++)
* Apple's Blocks extension to C (these are like lambdas) isn't in recent (or non-Apple) GCC: https://gcc.gnu.org/ml/gcc/2009-09/msg00264.html
* Can we link gfortran and LLVM system/C++ libraries together without violating the GPL with runtime exception (compiler_rt + libc++)? Not if the link is done statically, to the best of my knowledge. Also, can gfortran be built on top of compiler_rt? These are big unknowns.
* Minimum OSX required for clang (10.8, I think?)
- Actually, clang is usable beginning in 10.7, so this would be viable given your compatibility constraints.
- Also, all the refs I have seen suggest that this will still have C++11 support.
* Compatibility with defaults (built on 10.7, uses gcc) - where will people break? I think only if mixing packages - how do we ensure that we have all the ones we need?
- Metadata unification with Continuum - are we OK with adding some fields to the about section to match the Anaconda standard?
* Example: https://github.com/ContinuumIO/anaconda-recipes/blob/master/colander/meta.yaml
* license_family
* doc_url
* dev_url
* Constrain summary to 80 chars (longer text goes in description).
* Can we add this to the linter, and add the fields to recipes as we update them?
* What support for unicode should we have? Any? Summary/description only?
- CUDA/cuDNN update
- Improving infrastructure
* Better workflows with staged-recipes
* Fast finish AppVeyor on merge ( [conda-forge/staged-recipes#1142](https://github.com/conda-forge/staged-recipes/pull/1142) )
* Drop Travis CI matrix ( [conda-forge/staged-recipes#1234](https://github.com/conda-forge/staged-recipes/pull/1234) )
* Use CircleCI for feedstock generation ( [conda-forge/staged-recipes#916](https://github.com/conda-forge/staged-recipes/issues/916) )
* Keeping recipes out of PRs ( [conda-forge/staged-recipes#942](https://github.com/conda-forge/staged-recipes/issues/942) )
* Bank work in partial conversion ( [conda-forge/staged-recipes#915](https://github.com/conda-forge/staged-recipes/issues/915) )
- Notifications (how do we stay on top of them?)
- MSYS2
* Available on defaults - was in conda 4.1.7, but that was pulled. Coming in 4.1.8.
- Discussing Ray Donnelly's work on MSYS2 packages and how we want to use and integrate these into conda-forge.
- Some use cases to consider: OpenBLAS, FFTW, build tools, others?
- Binary data
* Do we include it in recipes?
- What kinds do we allow, if any (e.g. icons)?
- How do we verify the licensing?
- How do we verify that they are safe?
- Dev releases: where do they happen?
* Do we do them at conda-forge?
* Maybe add a label.
* Do we let others do them with a feedstock on their own repo?
- How do we enforce whatever we decide?
- Channel mirroring
* Can this point be explained a little? I have thought about this as well and would like to contribute here.
* Eric Dill has put together a script for copying a package from one channel to another: [conda-forge/conda-forge.github.io#134](https://github.com/conda-forge/conda-forge.github.io/pull/134)
* I have a really, really crude script that copies all of the packages in one channel to another, which I just put at https://gist.github.com/mwcraig/8473cf840f6d29236d6d8af699404555
* conda-build-all can copy from one channel to another: `conda build-all --inspect-channels conda-forge --upload-channels astropy some_package_recipe` will copy `some_package` from the conda-forge channel to astropy if it can, or build it if it doesn't exist on conda-forge. Discussion about what the desired behavior should be has started at [SciTools/conda-build-all#46](https://github.com/SciTools/conda-build-all/issues/46).
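For the simple one-package case, anaconda-client's copy subcommand may already be enough; the spec and target owner below are placeholders, and the exact flags should be double-checked against `anaconda copy --help` for the installed version:

```
# Copy an existing release (all of its files) from one anaconda.org owner to
# another without rebuilding; spec format is owner/package/version.
anaconda copy conda-forge/some_package/1.2.3 --to-owner astropy
```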
- Feedstock history
* Is it sacred?
* Do we rebase/force push?
* If so, under what conditions?
* How do we avoid multiple people doing this simultaneously?
* I don't think you can.
* IMHO, if it's just one author in staged-recipes, sure. If it's a feedstock, no force pushing - only to PRs against the feedstock. If people don't mind merged PRs, it sure is a lot simpler not to rebase. I have messed up rebasing a few times recently... =(
- Drop numpy 1.10 and reduce our build matrix. (Numba now works with numpy 1.11.)
- Feedstocks philosophy: Explicit vs implicit / reproducible vs redundant
- Signing packages
* Should be easy to do ( http://conda.pydata.org/docs/signed-packages.html ).
- There has been some interest previously.
- HTTPError: 503 Server Error: Service Unavailable: Back-end server is at capacity for url...
* Seems we are regularly running into this issue under normal usage conditions.
- Had discussed previously caching packages on AppVeyor and trying to reuse those to start.
- Maybe we need to consider caching on all CIs.
- Building our own Miniconda-like self-extracting installer scripts with packages via constructor.
- There have been improvements on Continuum's side that should help with this. In short, repodata (the package index for a given channel) was being generated for each anaconda.org query. This was unnecessarily costly, and some caching schemes have been implemented.
- Handling removal of unpinned/improperly pinned packages.
* Has been done manually thus far.
- This doesn't scale well though.
- Should we (semi-)automate removal? (See the sketch below.)
- Should we hot-fix broken packages? ( conda-forge/conda-forge.github.io#170 )
- Should we label them as broken?
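A sketch of the manual removal step, assuming anaconda-client's remove subcommand and a placeholder spec; the "broken" label question above would need something more than this:

```
# Remove an entire badly pinned release by hand (spec is a placeholder;
# an individual file can also be targeted by appending its path to the spec).
anaconda remove conda-forge/some_package/1.2.3

# Labelling a file as "broken" instead of deleting it would mean re-labelling on
# anaconda.org; to my knowledge that is not a single anaconda-client command, so
# treat that part as an open question rather than something this sketch covers.
```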
- Not currently buildable packages
* In particular, open-source code that is out of scope for the CIs.
* Examples include Qt4, Qt5, possibly PyQt4, possibly PyQt5, gcc, VTK, etc.
* How do we indicate they are built manually?
* Are we OK with uploading binaries not built by the CIs?
* When do we determine something is OK to be built manually?
* What procedures should people follow for building manually?
* Use a standard build Docker image, VM, or Vagrant file (see the sketch after this list).
* Sign the package?
* Implement reproducible builds where feasible (Linux): https://reproducible-builds.org/
- What changes do we need to make in conda-smithy or elsewhere?
- What other build infrastructure could we utilize?
* Would be nice to provide some volunteer builder abstraction, so that we could have an elastic worker farm that would be somewhat resilient.
- Standardizing build images is probably (relatively) easy - how to orchestrate, though?
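This is roughly what the "standard build Docker image" procedure referenced in the list above could look like; the image name condaforge/linux-anvil is an assumption (whatever image conda-smithy pins for Linux builds), and the recipe path and package name are placeholders:

```
# Start a shell in the shared Linux build image so the toolchain matches the CIs;
# mount the local recipe directory into the container (image name assumed).
docker run -it --rm -v "$(pwd)":/recipes condaforge/linux-anvil bash

# Inside the container:
conda install -y conda-build anaconda-client
conda build /recipes/some-big-package
# Built packages end up in the container's conda-bld output directory; upload them
# from there with "anaconda upload", using a token that can write to the channel.
```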
- Windows BLAS Solutions
* Still don't have a BLAS for Windows, yet we need something.
- Don't build a BLAS.
* NumPy has a small subset of BLAS functionality.
* Not sure what to do with SciPy (unable to find Windows wheels for it either).
- Build OpenBLAS with C support only.
* Will be pretty slow.
* Should work on all Pythons.
- Build OpenBLAS with MinGW compilers.
* Works with Python 2.7 and 3.4.
* Won't work with Python 3.5?
- Reuse something like R's BLAS.
* Is there a package for something like this?
* Will it have the same issues with Python 3.5?
- ATLAS?