Suggestion: More granular approach #12
@guidomb I forgot to say, if you like the idea I'm happy to help with the implementation 😄
I like the idea. For the gotcha you mentioned, the easiest solution I can think of is keeping the current behavior (uploading a zip for the whole Carthage/Build folder) while also uploading an archive per dependency. This has the problem of uploading more files. The other option would be just to upload the individual dependency archives. The third option would be to upload every dependency cache individually and create a small web service that, given a Cartfile.resolved, assembles the corresponding archive. What do you think?
Hey @guidomb, I'm glad you like the idea 😄 I like the idea of doing the minimum amount of work on the client and leveraging a lambda function to do more advanced stuff on the bucket. Having said that, I think a good first step would be to implement your first suggestion: uploading both single dependencies and the build folder. I think we could put it in place relatively quickly, and we'd have a working version that we could then improve with the lambda. We would need to put some logic in place, i.e. unzipping single dependencies, for that to work anyway. How does it sound? I'll start working on some related tasks straightaway.
If you want to start working on this I will be more than happy. I would like to have solid code coverage to make sure that we don't break anything (gonna try to work on this on the weekend). If you want to go for the client-based approach, I think the user should be able to opt out of this feature to avoid the extra transfer. Maybe the best approach would be to turn it off by default and add a command flag and/or an attribute in the configuration file. I'm gonna do some research on the AWS approach because I wanted to use a lambda function in a real case scenario and haven't got the chance yet. I think we could create a folder for each user / organization slug and then a zip file with the name of the dependency and the commit or tag as it appears in the Cartfile.resolved. Then in the S3 bucket we would have the following folder structure:
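The layout guidomb describes might look roughly like this (hypothetical names; the slug, dependency and revision would come from the project and its Cartfile.resolved):

```
<bucket>/
  guidomb/                   # user or organization slug
    Result-3.2.4.zip         # dependency name + tag from Cartfile.resolved
    ReactiveSwift-2.0.1.zip
```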
And the approach could be to first try to find a zip file for all the dependencies, as it works now, and if there is no such archive, try to fetch each dependency individually. Finally, if there are still missing dependencies, fall back to executing carthage bootstrap. What do you think?
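The lookup strategy described above could be sketched like this in Ruby (a rough sketch with hypothetical method names, not carthage_cache's actual API):

```ruby
# Sketch of the proposed lookup strategy. The repository methods
# (download_full_archive, download_dependency_archive) are hypothetical,
# not carthage_cache's real API.
class GranularCache
  def initialize(repository, resolved_dependencies)
    @repository = repository                 # e.g. an S3-backed store
    @dependencies = resolved_dependencies    # parsed from Cartfile.resolved
  end

  # Returns the dependencies that still need to be built with
  # `carthage bootstrap` after exhausting the cache.
  def restore
    # First, try the current behavior: one archive for the whole build folder.
    return [] if @repository.download_full_archive
    # Otherwise fetch each dependency's archive individually and report
    # the ones that were missing from the cache.
    @dependencies.reject { |dep| @repository.download_dependency_archive(dep) }
  end
end
```

The caller would then run carthage bootstrap only when `restore` returns a non-empty list.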
Good stuff!
I'm a big TDD fan myself, wouldn't have it any other way 😃
I think it makes sense. +1 on all the rest, only two things:
FYI: I thought I'd be able to start working on this soon, but it might not be till the middle of next week.
I thought having the folders would make it easier if you wanted to navigate using the AWS UI, but whatever you think is best or makes it easier (as far as I'm concerned, creating folders in S3 is just prepending the name of the folder when you set the name of the file).
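The point that S3 "folders" are just key prefixes can be shown with a tiny sketch (the naming scheme is illustrative, not the gem's actual one):

```ruby
# S3 has no real directories: a "folder" is just a prefix in the object key,
# so building a nested layout is plain string construction.
def archive_key(org_slug, dependency, revision)
  "#{org_slug}/#{dependency}/#{revision}.zip"
end
```

Uploading an object under such a key is enough for the AWS console to display the prefix segments as folders.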
I've just added some tests to get us covered -> https://codeclimate.com/github/guidomb/carthage_cache/coverage
Fantastic ✨ I should be able to start working on this before the end of the week. Sorry about the delay 😳
No need to be sorry. Thanks for contributing!
So I've been spending a bit of time on this. I have some code I could use to open a PR, but... I bumped into an issue which I think might compromise the whole granular cache idea. Carthage doesn't provide a bidirectional way to go from a built framework name back to its Cartfile entry: running carthage bootstrap on a single Cartfile entry can produce several frameworks.
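As an illustration (using a well-known dependency, which may differ from mokagio's original example), one Cartfile entry can fan out into several built frameworks:

```
# Cartfile
github "ReactiveX/RxSwift" ~> 3.0

# After `carthage bootstrap`, Carthage/Build/iOS contains:
RxSwift.framework
RxCocoa.framework
RxBlocking.framework
RxTest.framework
```

Nothing in those framework names points back to the single ReactiveX/RxSwift entry.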
I don't see any robust way to go from the Cartfile entry to the built frameworks, and as such I don't know how much value the granular cache would actually add. On top of that, in the few tests I've made, the download speed of the whole cache versus the single dependency archives is always comparable. My take is that the gain wouldn't justify the extra complexity. How do you feel about it? I feel quite dumb for not realizing this earlier...
I think you have a good point, and any attempt to implement such a feature in a robust way will probably make carthage_cache way more complex. We can close this for now and, in case anyone is interested in this, we can re-open it.
I am interested! :) Well, I've just read through all your comments and got quite excited until the last message from @mokagio, where he says it wouldn't add much real-world value. But it does for us: we frequently run into a situation where a project has apps for different platforms (say iOS, tvOS, macOS) and we want to share as much logic as possible between them. So our apps for the different platforms merely serve as the UI for the framework that holds most of the logic, which we integrate using Carthage. Now when we need to make a change in the framework for a new feature on the iOS app, we need to update the framework project and rebuild it using Carthage. The shared framework of course has its own dependencies (also included via Carthage), and those are installed each time we make a change in the shared framework and want to update all apps (iOS, tvOS and macOS). So we finally end up in a mess like this:
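A sketch of the setup Dschee describes, with hypothetical framework names:

```
iOS app ───┐
tvOS app ──┼──> SharedLogic.framework ──> DepA.framework, DepB.framework, …
macOS app ─┘
            (integrated via Carthage)     (SharedLogic's own Carthage deps)
```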
Now when we make a change in the shared framework, we need to update the dependencies of each platform, even though we are already using the tool. I hope I was able to explain the situation where this is needed. As for the rest, I have one comment to make:
Well, Carthage happens to make that connection somehow, doesn't it? And the good thing about it is that Carthage is open source, so we can have a look into the code. I think, basically, Carthage makes a web request to the Git project and has a look into the Xcode project's shared schemes to figure out which frameworks get built.
@Dschee We also have shared internal libraries, but we haven't had this issue because those internal shared libraries don't change that often. In one particular project that required frequent changes to the shared library, we ended up using the master branch and using git submodules for all dependencies in that project. That solved the issue of having to build all the dependencies every time we added a change to the shared library. This might or might not suit your needs. I created this tool to solve the pain points of our particular development flow. Currently I only add features to this project that are small and don't add too much complexity to the tool, or, if they do, they add significant value to all users. That being said, I'd be more than happy to consider adding such a feature if you manage to implement it in a robust way. Personally I don't have the time nor the need to implement it, but I have no problem giving you feedback and reviewing a PR.
Hey @guidomb, first of all, nice gem! It was pretty easy to set it up and it worked straightaway. Great job.
I have a suggestion on how the tool could behave more efficiently when used across multiple projects.
What do you think of the idea of zipping and uploading the dependencies one by one, rather than the whole Carthage/Build folder? Here's a scenario in which I think it would be useful:
Assuming that we were to set up the projects in order, 1 then 2 then 3, this is what would happen:
Each of the three projects had to perform carthage bootstrap and upload its cache. If we were instead to have a cached version of each built dependency, this is what would happen:
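As an illustration, assuming three hypothetical projects with overlapping dependencies A, B and C:

```
With whole-folder caching:
  Project 1 (deps A, B)    -> no matching archive: carthage bootstrap, upload zip of {A, B}
  Project 2 (deps B, C)    -> no matching archive: carthage bootstrap, upload zip of {B, C}
  Project 3 (deps A, B, C) -> no matching archive: carthage bootstrap, upload zip of {A, B, C}

With per-dependency caching:
  Project 1 (deps A, B)    -> carthage bootstrap, upload A.zip and B.zip
  Project 2 (deps B, C)    -> download B.zip, build only C, upload C.zip
  Project 3 (deps A, B, C) -> download A.zip, B.zip and C.zip, nothing to build
```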
In this scenario only Project 1 had to go through the whole carthage bootstrap process, and Project 3 was simply able to download cached builds. This means that "at scale" each project using carthage_cache would tap into the cache of the others. There is a gotcha: given the current behaviour of carthage, if one of the dependencies is not cached, carthage bootstrap would have to be run anyway. I'm confident we can find a way to work around this issue, or maybe close an eye for the moment. What's your take?
Cheers 🍻