https://bayt.page.link/88337Gaw2XZwoQwJ8

Job Description

About the job

As a Senior Platform Data Engineer at Aspire, you will play a crucial role in the development, optimization, and management of our data platform. You will be responsible for programming, data orchestration, and ensuring the seamless operation of our Hadoop and Apache Hive environments. Additionally, you'll be at the forefront of Spark application development and contribute to the enhancement of our data processing capabilities.

What you’ll do

  • Utilize expertise in Python and Java for Airflow code development.
  • Apply strong OOP/OOD fundamentals to migrate/update Airflow code.
  • Write unit tests for Airflow and general Python applications.
  • Work within the SDLC framework.
  • Work with Docker and Kubernetes for containerization and orchestration.
  • Develop complex Directed Acyclic Graphs (DAGs) for Airflow (a minimal sketch follows this list).
  • Manage Airflow Scheduler, Metastore, and Webserver components.
  • Administer Ambari and manage AWS RDS metastore.
  • Execute Hive Query Language (HQL) for data processing.
  • Maintain Hive servers and UDFs.
  • Work with Hadoop Data Platform (HDP), HDFS, and YARN.
  • Conduct unit testing for Spark applications.
  • Optimize Spark parameters for performance.
  • Utilize expertise in Spark v2.4 and v3.2.
  • Work with Apache Livy and Spark SQL.
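For illustration only (not part of the posting), here is a minimal sketch of the kind of Airflow DAG development described in the list above, written against the Airflow 1.10.x Python API mentioned later in the requirements; the dag_id, task ids, and callables are hypothetical placeholders, not names used by Aspire.

# Minimal Airflow DAG sketch (Airflow 1.10.x import paths); all names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def extract(**context):
    # Placeholder extract step; a real task would pull data from a source system.
    return "extracted"


def load(**context):
    # Placeholder load step; a real task would write to Hive/HDFS.
    pass


default_args = {
    "owner": "data-platform",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_daily_pipeline",  # hypothetical
    default_args=default_args,
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(
        task_id="extract",
        python_callable=extract,
        provide_context=True,  # needed in 1.10.x to pass context kwargs
    )
    load_task = PythonOperator(
        task_id="load",
        python_callable=load,
        provide_context=True,
    )

    extract_task >> load_task  # extract runs before load
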
What you’ll need

  • Bachelor’s Degree in Computer Science or similar.
  • Minimum of 4 years of experience in software testing, including hands-on experience in Web and Mobile testing.
  • Strong expertise in Python v2, v3.7+, and Java, with a focus on Hive UDFs.
  • Demonstrated ability to write unit tests for Airflow and Python applications (a unit-test sketch follows this list).
  • In-depth knowledge of Software Development Life Cycle (SDLC) processes.
  • Practical experience with Docker and Kubernetes for containerization and orchestration.
  • Proficiency in complex Directed Acyclic Graph (DAG) development for Airflow v1.8 and 1.10.15.
  • Hands-on experience managing Airflow Scheduler, Metastore (AWS RDS Instance), and Webserver (Flask application).
  • Familiarity with Hadoop and Apache Hive, including Ambari management, Hive Query Language (HQL), and Hive UDFs.
  • Experience with Hadoop Data Platform (HDP), Hadoop Distributed File System (HDFS), and YARN.
  • Proficiency in Spark v2.4 and v3.2, including unit testing, Apache Livy, and Spark SQL.
  • Previous exposure to Apache Zeppelin and Presto/Trino is a plus.
  • Understanding of AWS DevOps fundamentals and Systems Administration.
  • Knowledge of data architecture, including dimensional modeling and taxonomy.
  • Strong analytical and problem-solving skills, coupled with excellent attention to detail and organizational abilities.
  • Effective communication and collaboration skills in a team-oriented environment.
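As a rough illustration of the unit-testing expectation above, the following pytest sketch checks DAG integrity; it assumes the hypothetical example_daily_pipeline DAG from the earlier sketch and a standard Airflow DagBag setup, neither of which is specified in the posting.

# test_dag_integrity.py -- pytest sketch; the dag_id and task ids are the
# hypothetical ones from the DAG sketch above.
import pytest
from airflow.models import DagBag


@pytest.fixture(scope="module")
def dagbag():
    # Parse every DAG file in the configured dags folder; import errors surface here.
    return DagBag(include_examples=False)


def test_dags_import_cleanly(dagbag):
    assert not dagbag.import_errors, "DAG files failed to import: %s" % dagbag.import_errors


def test_example_pipeline_dependencies(dagbag):
    dag = dagbag.get_dag("example_daily_pipeline")  # hypothetical dag_id
    assert dag is not None
    # extract must run before load.
    assert "load" in dag.get_task("extract").downstream_task_ids
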
Why Aspire

In addition to competitive long-term total compensation, with salary and a performance-based bonus, we have a reward philosophy that extends beyond pay.

Be part of a remote organization (remote is here to stay).

  • Work and learn from great minds.
  • Explore new opportunities to learn and grow every day by attending technical and non-technical training.
  • Get market exposure by working with international tech leaders.
  • Nursery reimbursement benefit.
  • Aspire Wellness Program.
  • Attend virtual and onsite international tech conferences.

Job Details

Job Location: Amman, Jordan
Company Industry: Other Business Support Services
Company Type: Unspecified
Employment Type: Unspecified
Monthly Salary: Unspecified
Number of Vacancies: Unspecified
