[Image: How adversarial training is NOT done! Courtesy of Ociacia]
Let’s try to formalize our finding a bit. Can we make it all work?
Can we generate a schedule (rank a set of tasks) so that it maximizes the chances of achieving a set of predefined goals?
We have three types of tasks:
A task has the following properties:
Essentially, we are going to use Adversarial Training to oppose the opinions of our minions.
The importance I of a task is defined as I = r² · d, where r is the priority assigned by the user and d is the predicted duration of the task. Note that r can also be inherited from the priority of a higher-level goal, if the task is associated with one.
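The importance score above can be sketched as follows. This is a minimal illustration, not the post's actual implementation; the `Task` class and field names are assumptions.

```python
# Sketch of the importance score I = r^2 * d.
# Task, duration_hours, and goal_priority are illustrative names.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    priority: int                         # r, set by the user (e.g. 1-5)
    duration_hours: float                 # d, predicted duration
    goal_priority: Optional[int] = None   # priority of a parent goal, if any

def importance(task: Task) -> float:
    # The task inherits the goal's priority when it belongs to a goal.
    r = task.goal_priority if task.goal_priority is not None else task.priority
    return r ** 2 * task.duration_hours

print(importance(Task(priority=3, duration_hours=2.0)))                   # 18.0
print(importance(Task(priority=1, duration_hours=2.0, goal_priority=4)))  # 32.0
```

Squaring the priority makes high-priority tasks dominate the ranking even when their predicted duration is short.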
Here is where the pessimist comes in. He tries to ruin our beautifully crafted schedule. Since how much of a schedule gets completed is highly user-specific, we are going to train a model that predicts how much of a given schedule will be completed, using the user's historical data. Furthermore, it would be useful to have an uncertainty estimate attached to each prediction: we are not required to provide a schedule if we are highly uncertain about its completion. Thus, we might not produce predictions during the initial learning period for a user.
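One way to sketch the pessimist is below. The post does not specify a model, so this stand-in is a toy running Gaussian over historical completion fractions; all class and parameter names (`CompletionPredictor`, `min_history`, `max_std`) are assumptions chosen for illustration.

```python
# A hedged sketch of the "pessimist": predict how much of a schedule a user
# will complete, with an uncertainty estimate, and abstain when uncertain.
# Toy model: a Gaussian fit over past completion fractions.
import math

class CompletionPredictor:
    def __init__(self, min_history: int = 5, max_std: float = 0.25):
        self.history: list[float] = []   # past completion fractions in [0, 1]
        self.min_history = min_history   # length of the initial learning period
        self.max_std = max_std           # abstain above this uncertainty

    def observe(self, completed_fraction: float) -> None:
        self.history.append(completed_fraction)

    def predict(self):
        """Return (mean, std), or None while too uncertain to commit."""
        n = len(self.history)
        if n < self.min_history:
            return None                  # still in the learning period
        mean = sum(self.history) / n
        var = sum((x - mean) ** 2 for x in self.history) / (n - 1)
        std = math.sqrt(var)
        return (mean, std) if std <= self.max_std else None
```

Returning `None` rather than a low-confidence number is what lets the system decline to produce a schedule early on, exactly as described above.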
Since we want the schedule to become more or less challenging from week to week, the model should update its posterior by conditioning on the previous week's performance. Consequently, we should also define a step by which to increase or decrease the initial threshold.
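The weekly adjustment can be sketched with a simple rule in place of a full posterior update; the target, step size, and bounds below are illustrative assumptions, not values from the post.

```python
# Sketch of the weekly difficulty adjustment: if last week's completion
# beat the target, raise the threshold (harder schedules next week);
# if it fell short, lower it. Step size and bounds are assumptions.
def update_threshold(threshold: float,
                     completed_fraction: float,
                     target: float = 0.8,
                     step: float = 0.05) -> float:
    if completed_fraction >= target:
        return min(1.0, threshold + step)   # make next week more challenging
    return max(0.0, threshold - step)       # ease off

print(update_threshold(0.7, completed_fraction=0.9))  # 0.75
```

A Bayesian variant would instead shift the posterior over completion and derive the threshold from it, but the clipped additive step already captures the week-over-week feedback loop.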