Here is a pragmatic use case I came across at work during model development. I have a model that feeds into another model, which feeds into another, and so on. The steps themselves can run in an automated fashion, but some of the results need to be QA'd by the data science team before the next step is allowed to proceed. Another need is to publish some artifacts from a model (figures, numbers, etc.).

What we need is a framework where a human can observe this output (read-only) and then, after QA, set an OK flag (the only write step in the process) that allows the next step to proceed. I realize that this framework itself need not be built for numerical/scientific computation, since it can be quite decoupled from such requirements, and it would be invaluable for a use case like ours.

1) Does a framework/application already exist that would make this easier to implement?
2) Is anyone using such things?
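To make the gate I'm describing concrete, here is a minimal sketch of the pattern in Python, assuming a filesystem-based approval flag; the names (`wait_for_approval`, the `APPROVED` marker file, `run_pipeline`) are hypothetical illustrations, not taken from any particular framework:

```python
import time
from pathlib import Path

# Hypothetical layout: each stage publishes its artifacts into stage_dir,
# which reviewers treat as read-only. The single write a QA reviewer
# performs is dropping an APPROVED marker file into that directory.

def wait_for_approval(stage_dir: Path, poll_seconds: int = 60) -> None:
    """Block until a reviewer creates an APPROVED file in stage_dir."""
    flag = stage_dir / "APPROVED"
    while not flag.exists():
        time.sleep(poll_seconds)

def run_pipeline(stages, root: Path) -> None:
    """Run (name, step) stages in order, gating each on human sign-off."""
    for name, step in stages:
        stage_dir = root / name
        stage_dir.mkdir(parents=True, exist_ok=True)
        step(stage_dir)               # run the model; write figures/numbers here
        wait_for_approval(stage_dir)  # QA gate: next stage starts only after OK
```

The key property is that the reviewer's only write is the flag itself; all model output stays read-only to them, and the automation simply blocks until the flag appears.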