Algoscale Inc. - Big Data Developer/Data Engineer - Hadoop/Spark

Responsibilities: Build APIs in Scala. Integrate with Druid (migrate the current insights API from Python/Postgres to an API built on Scala/Akka and Druid). As part of this, build a dataset architecture on top of Druid that provides pre-aggregated data for all dashboards.
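The "pre-aggregated data" responsibility refers to Druid-style rollup: raw events are summarized by time bucket and dimension at ingestion so dashboard queries hit small summaries instead of raw rows. A minimal plain-Scala sketch of the idea (the event fields and names here are hypothetical illustrations, not the actual dashboard schema):

```scala
// Druid-style rollup sketch: group raw events by (time bucket, dimension)
// and pre-sum the metric, so queries read the compact summary.
case class Event(hour: String, country: String, clicks: Long)

def rollup(events: Seq[Event]): Map[(String, String), Long] =
  events
    .groupBy(e => (e.hour, e.country))      // rollup dimensions
    .view
    .mapValues(_.map(_.clicks).sum)         // pre-aggregated metric
    .toMap

val raw = Seq(
  Event("2020-01-01T10", "US", 3),
  Event("2020-01-01T10", "US", 2),
  Event("2020-01-01T10", "DE", 1)
)
val agg = rollup(raw)
// agg(("2020-01-01T10", "US")) == 5L
```

In Druid itself this aggregation happens at ingestion time via the datasource's rollup configuration; the sketch only shows the shape of the resulting summaries.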

Kafka is used widely as well. You may also work with a graph DB (TigerGraph), MongoDB, and Spark. You should be able to work as a real team member: show initiative, be proactive, responsible, open-minded, communicative, and non-toxic.

We are customer-oriented, so you should be comfortable with occasionally shifting priorities.

Required Skills: Proficiency with the Scala language (Akka HTTP, actors, streams). Experience building and optimizing ETLs, data pipelines, architectures, and datasets. Experience with Druid.

If you have experience with other big data tools such as Hadoop, HBase, MapReduce, Spark, Cassandra, DynamoDB, Kafka, or AWS services, we can cross-train.

Experience implementing any data warehouse. Experience with any graph DB, NoSQL DBs like MongoDB, and any RDBMS.

Educational Qualification Required: B.Tech/M.Tech, preferably in IT/computer science.

Background/Experience Required: 2+ years of work experience.
