Cognizant's Vacancies for Data Engineer – AWS PySpark Developer | 3–9+ Years' Experience, Apply Now!

Cognizant's Vacancies for Data Engineer

Are you a data enthusiast with a knack for AWS and PySpark development? Cognizant, a global leader in technology consulting and digital solutions, has fantastic news for professionals like you! They currently have multiple openings for the position of Data Engineer – AWS PySpark Developer, welcoming applicants with 3 to 9+ years of experience in this domain.

Title: Data Engineer – AWS PySpark Developer
Experience: 3 to 9 years
Location: Gurgaon/Noida, India
Job summary

Why Choose Cognizant?

  1. Cutting-Edge Projects: Cognizant is at the forefront of digital transformation, working on projects that leverage advanced technologies like AWS, PySpark, and more. As a Data Engineer, you'll have the opportunity to contribute to innovative solutions that drive business outcomes.
  2. Continuous Learning: Cognizant values skill development and offers a range of learning opportunities, including training programs, certifications, and hands-on experiences with emerging technologies.
  3. Global Exposure: With a global footprint and a diverse client base, Cognizant provides exposure to a wide range of industries and geographies, enriching your professional experience.
  4. Collaborative Culture: At Cognizant, collaboration is key. You'll work in teams comprising talented individuals from diverse backgrounds, fostering creativity, teamwork, and mutual growth.

Skill requirements:

  • 3 to 9 years of relevant experience in a Data Engineer role.
  • Excellent hands-on experience with AWS data technologies such as Glue, S3, Athena, EMR, and IAM.
  • Mandatory: hands-on experience in Python and PySpark. While Python as a language is usable for practically anything, we are specifically looking for application development, Extract/Transform/Load (ETL), and data lake curation experience using Python.
  • Hands-on experience with version control tools such as Git.
  • Experience with AWS analytics services such as Amazon Athena, DynamoDB, and AWS Glue.
  • Experience with AWS compute services such as AWS Lambda and Amazon EC2, and storage services such as Amazon S3, among others.
  • Experience/knowledge of bash/shell scripting will be a plus.
  • Has built ETL processes that ingest, copy, and structurally transform data in a wide variety of formats, including CSV, fixed-width, XML, and JSON.
  • Experience with columnar storage formats such as Parquet and ORC, as well as Avro.
  • Hands-on experience with CI/CD tools such as Jenkins to build, test, and deploy applications.
  • Excellent debugging skills.
  • Ability to quickly perform critical analysis and use creative approaches for solving complex problems.
  • Strong academic background.
  • Excellent written and verbal communication skills, and strong relationship-building skills.
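To illustrate the ETL and format-handling skills listed above, here is a minimal, framework-free Python sketch that parses CSV text and emits JSON-ready records with simple type casting. In a real Glue/PySpark job this would typically be done with DataFrames against S3; the column names and types here are hypothetical, chosen only for the example.

```python
import csv
import io
import json

def csv_to_json_records(csv_text, int_columns=()):
    """Parse CSV text and return a list of JSON-ready dicts,
    casting the named columns to int (a toy 'transform' step)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = []
    for row in reader:
        for col in int_columns:
            row[col] = int(row[col])  # structural transform: string -> int
        records.append(row)
    return records

# Hypothetical sample data, standing in for a file landed in S3.
raw = "id,name,score\n1,asha,91\n2,ravi,78\n"
records = csv_to_json_records(raw, int_columns=("id", "score"))
print(json.dumps(records))
```

The same extract-transform-load shape carries over to PySpark, where `spark.read.csv(...)` and `df.write.parquet(...)` replace the manual parsing and serialization.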

Duties and Responsibilities

  • Maintain knowledge of evolving industry trends, practices, techniques, and standards in Big Data, with a focus on AWS cloud technologies and Python/PySpark from an application-development perspective.
  • Be aware of DevOps practices and follow the appropriate steps and tools for code migration during the development phase.
  • Collaborate with cross-functional teams on the implementation of deliverables in all environments.
  • Develop code and designs to meet evolving needs while adhering to policies and standards.
  • Write test cases for unit testing and functional testing as the application is developed.
  • Conduct and participate in coding and design reviews.
  • Manage the test plan and risks through the development phases to implementation, ensuring no defects are introduced into the production environment.
  • Ensure required documentation for projects and/or enhancements are created and updated.
  • Monitor and support production applications to ensure optimum stability and performance before handing the code over to the operations team.
  • Understand data-warehousing concepts and fact/dimension-based modelling techniques for implementation in a data lake environment.
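The unit-testing responsibility above can be sketched with plain assert-based test functions, the style that pytest collects automatically. The transform under test here is a hypothetical example invented for illustration, not part of the actual role.

```python
def normalize_region(code):
    """Canonicalize a free-form region code (hypothetical transform,
    used only to illustrate writing unit test cases)."""
    mapping = {"in": "INDIA", "us": "USA"}
    return mapping.get(code.strip().lower(), "UNKNOWN")

def test_normalize_region():
    # Each assertion is one unit-test case covering a distinct input class.
    assert normalize_region(" IN ") == "INDIA"   # whitespace and case handled
    assert normalize_region("us") == "USA"       # happy path
    assert normalize_region("xx") == "UNKNOWN"   # unmapped code falls through

test_normalize_region()
print("all unit tests passed")
```

Run under pytest (`pytest test_transforms.py`), the same function would be discovered and reported automatically; the manual call at the end simply lets the sketch run standalone.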

How to Apply:

If you're ready to take on exciting challenges in data engineering and AWS PySpark development, don't miss this opportunity to join Cognizant's dynamic team! You can apply directly through Cognizant's Careers portal using the following link: Cognizant Careers – Data Engineer – AWS PySpark Developer Vacancy.

Seize the opportunity to work on impactful projects, enhance your skills, and be part of a global leader in technology and consulting. Apply now and embark on a rewarding journey with Cognizant!


For more job listings, visit the careerflock official page.
