You will join one of our multi-functional teams focusing on building one of our core features. We understand you will naturally have a preference for the frontend or the backend; the upcoming work is mostly frontend, but you need to be open to doing some backend work too! We are currently in a transition from a start-up to a scale-up business.
Our tech stack includes React, AWS, microservices and Golang, with native apps and C++ on mobile.
*Requirements:*
- You are language agnostic and know that the challenge at hand is what matters most
- You build it well - writing quality, clean code that you'll be happy to maintain two years from now
- You have substantial experience with React.js or another JavaScript library
- You have experience with a backend language, preferably Golang
- You're passionate about microservices, platforms and APIs that will allow us to ship even more innovative products
- You know what it takes to build software that will be used by thousands of customers every hour, with no downtime
- You can build the right solution to ship frequently, a solution that our customers trust with even their most critical operations
- You can bring a healthy mix of speed, delivery and quality to everything you do
- You want to work in a feature-driven environment, and you're a team player who makes key technology decisions while also mentoring others
- You are destined to make a big impact at a company on the verge of incredible growth (both in users and people)
We have a challenging and exciting opportunity for an experienced Data Engineer to take ownership of building our big data infrastructure. You will have both a passion for building things from the ground up and a vision for the strength this data ecosystem provides to the business, both now and as we scale. Our data lake is almost built; we need you to complete it, then maintain it, keep it up to date, load data into it and continue to evolve it. You will work with Scala, Kafka and SQL daily. We are also fully hosted on AWS, so you will use Redshift, S3 and Athena.
*The Role:*
- Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system built from high-quality components (Kafka, EMR + Spark, Akka, etc.)
- Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources
- Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems
- Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines
- Implement automated data workflows in collaboration with data analysts; continue to maintain and improve the system in line with growth
- Collaborate with Software Engineers on application events, ensuring the right data can be extracted
- Contribute to resource management for computation and capacity planning
- Dive deep into code and constantly innovate
*Requirements:*
- Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc.) and infrastructure
- Working knowledge of big data frameworks such as Apache Spark, Kafka, ZooKeeper, Hadoop, Flink, Storm, etc.
- Solid experience with Linux and database systems
- Experience with relational and NoSQL databases, query optimization and data modeling
- Familiarity with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc.
- Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools
- Excellent problem-solving skills
- Excellent verbal and written communication skills