Part 1: Buckets

Being paranoid about the future and micro-planning my day has been something of a personality trait for me. To no one's surprise, it's exhausting, and planning every single minute of the day to optimise for a vague definition of success isn't really helpful.
I might not be the best at coming up with a solution, but I think I have something that I would like to try out for a year.
Breaking it into buckets

"If you love doing what you do, no one else gets to decide if you're successful" – paraphrasing Harry Styles.
This is where I felt the need to define my own idea of what success would look like for the year, rather than borrow it from traditional ideals. There are three major buckets → Personal Goals, Professional Goals and Others.
- 12 hours a day for sleeping/eating and other human essentials xD
- That leaves 4380 hours in the year (12 × 365)
- Macro Goals → Goals for the year
- Micro Goals → Goals for 2 weeks.
| Bucket / Goal | Hours |
| --- | --- |
| **Personal** | **930 (77.5 hours a month)** |
| Understand the best way I can communicate and express information | 330 |
| What's my idea of recreation | 300 |
| Have a healthy eating/sleeping routine | 300 |
| **Professional** | **1890 (157.5 hours a month)** |
| Learn/push something new every week | 630 |
| Get an 8+ CGPA in the final semester and have an amazing BEP | 630 |
| Work towards a master's or PhD (publish paper/research/GRE) | 630 |
| **Others** | **1560 (130 hours a month)** |
| Read at least 40 books | 630 |
| Read 50 research papers and completely understand them | 630 |
| Listen to podcasts / read articles (basically, capture new ideas) | 300 |
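The arithmetic in the table can be sanity-checked in a few lines. This is just a quick sketch over the numbers above, nothing more:

```python
# Hours per goal, grouped by bucket, copied from the table above
buckets = {
    "Personal": [330, 300, 300],
    "Professional": [630, 630, 630],
    "Others": [630, 630, 300],
}

# Per-bucket totals and the grand total
totals = {name: sum(hours) for name, hours in buckets.items()}
print(totals)
print(sum(totals.values()))  # should equal the 4380-hour yearly budget
```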
What do you think? Any revisions or thoughts?
Part 2: Sentence Embeddings

We have come across word embeddings: they map words into a vector space, which lets us see which words are similar to one another. However, a lot of the information stored in a sentence is missed when we only use word embeddings. This is where sentence embeddings are really helpful.
```python
from gensim.models import FastText
from fse import IndexedList
from fse.models import Average

# Read the corpus: one sentence per line, split on whitespace
with open("data.txt", "r") as file:
    sentences = [line.split() for line in file]

# Train FastText word vectors
# (on gensim >= 4.0, use vector_size=10 instead of size=10)
ft = FastText(sentences, min_count=1, size=10)

# Average the word vectors of each sentence to get sentence vectors
model = Average(ft)
model.train(IndexedList(sentences))

# Cosine similarity between sentence 0 and sentence 1
model.sv.similarity(0, 1)
```
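To make the averaging idea concrete, here is a minimal sketch of what an averaging sentence embedding does, using made-up toy word vectors (not real FastText vectors) and plain NumPy:

```python
import numpy as np

# Hypothetical 2-dimensional word vectors, purely for illustration
word_vectors = {
    "cats": np.array([1.0, 0.0]),
    "purr": np.array([0.0, 1.0]),
    "dogs": np.array([1.0, 0.2]),
    "bark": np.array([0.1, 1.0]),
}

def sentence_vector(tokens):
    # A sentence vector is simply the mean of its word vectors
    return np.mean([word_vectors[t] for t in tokens], axis=0)

def cosine(a, b):
    # Cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = sentence_vector(["cats", "purr"])
s2 = sentence_vector(["dogs", "bark"])
print(cosine(s1, s2))  # high, since both sentences mix similar directions
```

The averaging loses word order, which is exactly the information richer sentence-embedding methods try to keep, but it is a surprisingly strong baseline for similarity tasks.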