Atrenta On-Campus interview experience

I am writing this post to help others aspiring for a job at Atrenta. I am typing in a bit of a hurry, so please excuse any grammatical mistakes.

Written Round:
Time : 1 hour
3 sections- CSE based + ECE based + general puzzle
Priority: 1st: CSE, 2nd: Puzzle, 3rd: ECE
1. CSE based: Questions based on C/C++. "C programming concepts" by Jitender Chabra Sir should suffice. This section had around 12-13 problems, of the "find the output", "complete the code", and "write the code" variety.
2. Puzzle section: Around 10 general puzzles. If you already know the answer you can ace this section; otherwise there is little time to think.
3. ECE section: Around 10 questions, based on simple ECE concepts like flip-flops, boolean expressions, decoders etc.

I attempted all the problems from the CSE section (in around 30 minutes), 6-7 questions from the puzzle section (around 15-20 minutes), and around 5 questions from the ECE part (the last 10-15 minutes).

-> Make sure you are comfortable with pointers.

1st round:
* Asked two puzzles.
* Asked me to write a VHDL program, as I had mentioned it in my resume.
* Asked me to write a class for library management system (complete class, yes, destructor too).
* Asked me about minimum spanning tree algorithms, indirectly.
* 1-2 other stack-queue problems that I don't remember.

2nd round:
* Long discussion on finding connected components in a graph forest, along with full code and complexity analysis.
* Another discussion on hash tables.

3rd round:
The interviewer asked me about my academic projects, followed by a few simple data structure problems, like finding a loop in a linked list.

After all the interviews, they announced the results and I was selected.

For any clarification, please comment below.

