Daily Blog 17: I am back back back

Part 1: Buckets

Being paranoid about the future and micro-planning my day has been somewhat of a personality trait for me. To no one's surprise, it's exhausting, and planning every single minute of the day to optimise for a vague definition of success isn't really helpful.
I might not be the best at coming up with a solution, but I think I have something that I would like to try out for a year. 

Breaking it into buckets. 

If you love doing what you do, no one else gets to decide if you're successful (paraphrasing Harry Styles).
This is where I felt the need to define my own idea of what success would look like for the year, rather than borrowing it from traditional ideals. There are three major buckets → Personal Goals, Professional Goals, and Others.
Assumptions
  • ~10 hours a day for sleeping/eating and other human essentials xD
  • That leaves about 5,110 hours in the year; the buckets below budget 4,380 of them (12 hours a day)
  • Macro Goals → goals for the year
  • Micro Goals → goals for 2 weeks
Personal → 930 hours (77.5 hours a month)
  • Understand the best way I can communicate and express information → 330
  • Figure out what my idea of recreation is → 300
  • Have a healthy eating/sleeping routine → 300

Professional → 1890 hours (157.5 hours a month)
  • Learn/push something new every week → 630
  • Get an 8+ CGPA in the final semester and have an amazing BEP → 630
  • Work towards a master's or PhD (publish a paper/research/GRE) → 630

Others → 1560 hours (130 hours a month)
  • Read at least 40 books → 630
  • Read 50 research papers and completely understand them → 630
  • Listen to podcasts/read articles (basically, capture new ideas) → 300
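Since I clearly don't trust my own arithmetic, here's a quick Python sanity check that the buckets actually add up. Every number comes straight from the table above.

# Sanity check of the yearly budget; all numbers are from the table above
personal = 330 + 300 + 300        # 930 hours
professional = 630 + 630 + 630    # 1890 hours
others = 630 + 630 + 300          # 1560 hours

total = personal + professional + others
print(total)        # 4380 hours for the year
print(total / 365)  # 12.0 hours a day
print(personal / 12, professional / 12, others / 12)  # 77.5 157.5 130.0 hours a month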
This isn't perfect, and it isn't supposed to be a strict hourly plan. It's just a guideline, a skeleton, so that I have an idea of what success looks like.
What do you think? Any revisions or thoughts?
If you want weekly updates on the progress, you can subscribe to the newsletter.

Part 2: Sentence Embeddings

We have already come across word embeddings: they map words to a vector space, which lets us tell which words are similar to one another. However, a sentence carries a lot of information that word embeddings alone miss. This is where sentence embeddings are really helpful.
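The simplest sentence embedding is just the average of the word vectors in the sentence, which is what the Average model used below computes. Here is a minimal sketch of the idea; the word vectors are made-up toy numbers, not real embeddings.

import numpy as np

# Toy word vectors (made-up numbers, purely for illustration)
word_vecs = {
    "the": np.array([0.1, 0.3]),
    "cat": np.array([0.7, 0.2]),
    "sat": np.array([0.4, 0.9]),
}

# Averaging the word vectors gives one fixed-size vector for the whole sentence
sentence = ["the", "cat", "sat"]
sentence_vec = np.mean([word_vecs[w] for w in sentence], axis=0)
print(sentence_vec)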

Fast Sentence Embeddings (fse)

Fast Sentence Embeddings (fse) is a Python library that serves as an addition to Gensim. It is intended to compute sentence vectors for large collections of sentences or documents. Usage:
from fse.models import Average
from fse import IndexedList
from gensim.models import FastText

# Read the corpus and tokenise each line into a list of words
with open("data.txt", "r") as file:
    sentences = [line.split() for line in file]

# Train word vectors first (note: gensim 4+ renamed `size` to `vector_size`)
ft = FastText(sentences, min_count=1, size=10)

# Average the word vectors of each sentence to get sentence vectors
model = Average(ft)
model.train(IndexedList(sentences))

# Similarity between the first two sentences in the corpus
model.sv.similarity(0, 1)
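If everything worked, model.sv holds one vector per sentence, and similarity(0, 1) should return the cosine similarity between the first two sentences in data.txt, with values close to 1 meaning the sentences are very similar.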
