Job Description

About the job

As a Senior Platform Data Engineer at Aspire, you will play a crucial role in the development, optimization, and management of our data platform. You will be responsible for programming, data orchestration, and ensuring the seamless operation of our Hadoop and Apache Hive environments. Additionally, you will be at the forefront of Spark application development and contribute to the enhancement of our data processing capabilities.

What you’ll do

  • Utilize expertise in Python and Java for Airflow code development.
  • Apply strong OOP/OOD fundamentals to migrate/update Airflow code.
  • Write unit tests for Airflow and general Python applications (a minimal sketch follows this list).
  • Work within the SDLC framework.
  • Work with Docker and Kubernetes for containerization and orchestration.
  • Develop complex Directed Acyclic Graphs (DAGs) for Airflow.
  • Manage Airflow Scheduler, Metastore, and Webserver components.
  • Administer Ambari and manage AWS RDS metastore.
  • Execute Hive Query Language (HQL) for data processing.
  • Maintain Hive servers and UDFs.
  • Work with Hadoop Data Platform (HDP), HDFS, and YARN.
  • Conduct unit testing for Spark applications.
  • Optimize Spark parameters for performance.
  • Utilize expertise in Spark v2.4 and v3.2.
  • Work with Apache Livy and Spark SQL.
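
For orientation, here is a minimal, illustrative sketch of the kind of Airflow work described above: a small DAG plus a parse-level unit test. The DAG ID, task ID, and callable are hypothetical, and the 1.x-style imports assume the Airflow versions listed in the requirements below.

```python
# Illustrative sketch only: a small daily DAG and a parse-level unit test.
# DAG ID, task ID, and callable are hypothetical; the operator import follows
# the Airflow 1.x style matching the versions listed in the requirements.
from datetime import datetime

from airflow import DAG
from airflow.models import DagBag
from airflow.operators.python_operator import PythonOperator


def extract_partition(**context):
    # Placeholder task body; a real task might load one Hive partition per run.
    return context["ds"]


dag = DAG(
    dag_id="example_daily_load",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
)

extract = PythonOperator(
    task_id="extract_partition",
    python_callable=extract_partition,
    provide_context=True,  # needed on Airflow 1.x to receive the context kwargs
    dag=dag,
)


def test_dag_parses():
    # Loading the DagBag surfaces import errors and broken DAG definitions.
    bag = DagBag(include_examples=False)
    assert "example_daily_load" in bag.dags
    assert not bag.import_errors
```

A parse-level test like this is typically run with pytest in CI so that broken DAG definitions are caught before they reach the Scheduler.
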
What you’ll need

  • Bachelor’s Degree in Computer Science or similar.
  • Minimum of 4 years of experience in software testing, including hands-on experience in Web and Mobile testing.
  • Strong expertise in Python v2, v3.7+, and Java, with a focus on Hive UDFs.
  • Demonstrated ability to write unit tests for Airflow and Python applications.
  • In-depth knowledge of Software Development Life Cycle (SDLC) processes.
  • Practical experience with Docker and Kubernetes for containerization and orchestration.
  • Proficiency in complex Directed Acyclic Graph (DAG) development for Airflow v1.8 and 1.10.15.
  • Hands-on experience managing Airflow Scheduler, Metastore (AWS RDS Instance), and Webserver (Flask application).
  • Familiarity with Hadoop and Apache Hive, including Ambari management, Hive Query Language (HQL), and Hive UDFs.
  • Experience with Hadoop Data Platform (HDP), Hadoop Distributed File System (HDFS), and YARN.
  • Proficiency in Spark v2.4 and v3.2, including unit testing, Apache Livy, and Spark SQL (see the sketch after this list).
  • Previous exposure to Apache Zeppelin and Presto/Trino is a plus.
  • Understanding of AWS DevOps fundamentals and Systems Administration.
  • Knowledge of data architecture, including dimensional modeling and taxonomy.
  • Strong analytical and problem-solving skills, coupled with excellent attention to detail and organizational abilities.
  • Effective communication and collaboration skills in a team-oriented environment.
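
Along the same lines, here is a minimal, illustrative sketch of a Spark SQL transform covered by a pytest-style unit test against a local SparkSession; the function, view, and column names are hypothetical.

```python
# Illustrative sketch only: a small Spark SQL transform and a pytest-style test.
# Function, view, and column names are hypothetical.
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    session = (
        SparkSession.builder
        .master("local[2]")
        .appName("unit-tests")
        # A small shuffle-partition count keeps local tests fast.
        .config("spark.sql.shuffle.partitions", "2")
        .getOrCreate()
    )
    yield session
    session.stop()


def daily_totals(spark, df):
    # Hypothetical transform: aggregate order amounts per day via Spark SQL.
    df.createOrReplaceTempView("orders")
    return spark.sql(
        "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date"
    )


def test_daily_totals(spark):
    df = spark.createDataFrame(
        [("2023-01-01", 10.0), ("2023-01-01", 5.0), ("2023-01-02", 7.0)],
        ["order_date", "amount"],
    )
    result = {r["order_date"]: r["total"] for r in daily_totals(spark, df).collect()}
    assert result == {"2023-01-01": 15.0, "2023-01-02": 7.0}
```

Running the suite against a local[2] session avoids the need for a cluster and keeps the tests deterministic.
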
Why Aspire

In addition to competitive long-term total compensation with salary and a performance-based bonus, we have a reward philosophy that extends beyond this.

Be part of an organization where remote is here to stay:

  • Work and learn from great minds.
  • Explore new opportunities to learn and grow every day by attending technical and non-technical training.
  • Get market exposure by working with international tech leaders.
  • Nursery reimbursement benefit.
  • Aspire Wellness Program.
  • Attend virtual and onsite international tech conferences.

Job Details

Job Location: Amman, Jordan
Company Industry: Other Business Support Services
Company Type: Unspecified
Employment Type: Unspecified
Monthly Salary Range: Unspecified
Number of Vacancies: Unspecified
