DGB tackles complex challenges to improve the quality of life of its customers across the globe. If you are interested in using disruptive Big Data tools and technologies to transform established industries and build cool products, then let's get talking. The AI development vertical at DGB is looking for technology thought leaders who are highly technical, hands-on, and ready to lead from the front.
We are looking for a Big Data Engineer who will collect, store, process, and analyze huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities
- Responsible for Hadoop development and implementation, including loading from disparate data sets and preprocessing with Hive and Pig.
- Scope and deliver various Big Data solutions.
- Design solutions independently based on high-level architecture.
- Manage the technical communication between the survey vendor and internal systems.
- Maintain the production systems (Kafka, Hadoop, Cassandra, Elasticsearch).
- Collaborate with other development and research teams.
- Build a cloud-based platform that allows easy development of new applications.
- Manage real-time streaming data on Big Data platforms.
- Process unstructured data into a form suitable for analysis, then analyze the processed data.
- Monitor performance and advise on any necessary infrastructure changes.
- Define data retention policies.
Required Skill Set
- MS/BS in Computer Science, Computer Engineering, or a similar field.
- 2–4 years of recent experience in big data architecture design and engineering.
- Experience with machine learning toolkits such as H2O, SparkML, or Mahout.
- Experience with Java-oriented technologies (JBoss, Spring, SpringMVC, Hibernate, REST/SOAP).
- Experience with relational and non-relational databases (e.g., MySQL and NoSQL stores such as MongoDB).
- Experience with MapReduce.
- Experience with Spark, the Hadoop ecosystem, and similar frameworks.
- Familiarity with tools such as AWS, Mesos, or Docker, and an instinct for automation.
- Creative and innovative approach to problem-solving.
- Hands-on experience building prototypes to validate proofs of concept, and collaborating with business units to adopt them to drive growth and create long-term value.
- Deep knowledge of data mining, machine learning, natural language processing, or information retrieval.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
Preferred Additional Skills
- Exceptional communication skills, written and verbal.
- Publication record in big data, machine learning, security, or software architecture, and a strong professional network.
- Team player and great collaborator.
- Experience working with an international team spread across geographies.
We offer a competitive salary package.

Send your resume to email@example.com