Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Key Responsibilities Design, develop, and optimize data pipelines using Python and AWS services such asGlue, Lambda, S3, EMR, Redshift, Athena, and Kinesis. Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses). Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions. Monitor, troubleshoot, and enhance data workflows for performance and cost optimization. Ensure data quality and consistency by implementing validation and governance practices. Work on data security best practices in compliance with organizational policies and regulations. Automate repetitive data engineering tasks using Python scripts and frameworks. Leverage CI/CD pipelines for deployment of data workflows on AWS. Required Skills and Qualifications Professional Experience:5+ years of experiencein data engineering or a related field. Programming: Strong proficiency inPython, with experience in libraries likepandas,pySpark,orboto3. AWS Expertise: Hands-on experience with core AWS services for data engineering, such as: AWS Gluefor ETL/ELT. S3for storage. RedshiftorAthenafor data warehousing and querying. Lambdafor serverless compute. KinesisorSNS/SQSfor data streaming. IAM Rolesfor security. Databases: Proficiency in SQL and experience withrelational(e.g., PostgreSQL, MySQL) andNoSQL(e.g., DynamoDB) databases. Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus. DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline. Version Control: Proficient with Git-based workflows. Problem Solving: Excellent analytical and debugging skills. Optional Skills Knowledge ofdata modelinganddata warehouse designprinciples. Experience withdata visualization tools(e.g., Tableau, Power BI). Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes). Exposure to other programming languages like Scala or Java.
Posted 3 weeks ago
6.0 - 11.0 years
4 - 8 Lacs
Kolkata
Work from Office
Must have knowledge in Azure Datalake, Azure function, Azure Databricks, Azure data factory, PostgreSQL Working knowledge in Azure devops, Git flow would be an added advantage. (OR) SET 2 Must have working knowledge in AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, AWS RedShift. Should have demonstrable knowledge and expertise in working with timeseries data. Working knowledge in delivering data engineering / data science projects in Industry 4.0 is an added advantage. Should have knowledge on Palantir. Strong problem-solving skills with an emphasis on sustainable and reusable development. Experience using statistical computer languages to manipulate data and draw insights from large data sets Python/PySpark, Pandas, Numpy seaborn / matplotlib, Knowledge in Streamlit.io is a plus Familiarity with Scala, GoLang, Java would be added advantage. Experience with big data toolsHadoop, Spark, Kafka, etc. Experience with relational databases such as Microsoft SQL Server, MySQL, PostGreSQL, Oracle and NoSQL databases such as Hadoop, Cassandra, Mongo dB Experience with data pipeline and workflow management toolsAzkaban, Luigi, Airflow, etc Experience building and optimizing big data data pipelines, architectures and data sets. Strong analytic skills related to working with unstructured datasets. Primary Skills Provide innovative solutions to the data engineering problems that are faced in the project and solve them with technically superior code & skills. Where possible, should document the process of choosing technology or usage of integration patterns and help in creating a knowledge management artefact that can be used for other similar areas. Create & apply best practices in delivering the project with clean code. Should work innovatively and have a sense of proactiveness in fulfilling the project needs. Additional Information: Reporting to Director- Intelligent Insights and Data Strategy Travel Must be willing to be deployed at client locations anywhere in the world for long and short term as well as should be flexible to travel on shorter duration within India and abroad
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. - Grade Specific The role support the team in building and maintaining data infrastructure and systems within an organization. Skills (competencies) Ab Initio Agile (Software Development Framework) Apache Hadoop AWS Airflow AWS Athena AWS Code Pipeline AWS EFS AWS EMR AWS Redshift AWS S3 Azure ADLS Gen2 Azure Data Factory Azure Data Lake Storage Azure Databricks Azure Event Hub Azure Stream Analytics Azure Sunapse Bitbucket Change Management Client Centricity Collaboration Continuous Integration and Continuous Delivery (CI/CD) Data Architecture Patterns Data Format Analysis Data Governance Data Modeling Data Validation Data Vault Modeling Database Schema Design Decision-Making DevOps Dimensional Modeling GCP Big Table GCP BigQuery GCP Cloud Storage GCP DataFlow GCP DataProc Git Google Big Tabel Google Data Proc Greenplum HQL IBM Data Stage IBM DB2 Industry Standard Data Modeling (FSLDM) Industry Standard Data Modeling (IBM FSDM)) Influencing Informatica IICS Inmon methodology JavaScript Jenkins Kimball Linux - Redhat Negotiation Netezza NewSQL Oracle Exadata Performance Tuning Perl Platform Update Management Project Management PySpark Python R RDD Optimization SantOs SaS Scala Spark Shell Script Snowflake SPARK SPARK Code Optimization SQL Stakeholder Management Sun Solaris Synapse Talend Teradata Time Management Ubuntu Vendor Management
Posted 3 weeks ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad
Remote
Job Title: Senior Data Engineer Location: Remote Job Type: Fulltime Experience Level: 7+ years About the Role: We are seeking a highly skilled Senior Data Engineer to join our team in building a modern data platform on AWS. You will play a key role in transitioning from legacy systems to a scalable, cloud-native architecture using technologies like Apache Iceberg, AWS Glue, Redshift, and Atlan for governance. This role requires hands-on experience across both legacy (e.g., Siebel, Talend, Informatica) and modern data stacks. Responsibilities: Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS. Migrate legacy data solutions (Siebel, Talend, Informatica) to modern AWS-native services. Implement and manage a data lake architecture using Apache Iceberg and AWS Glue. Work with Redshift for data warehousing solutions including performance tuning and modelling. Apply data quality and observability practices using Soda or similar tools. Ensure data governance and metadata management using Atlan (or other tools like Collibra, Alation). Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions. Build scalable, secure, and high-performing data platforms supporting both batch and real-time use cases. Participate in defining and enforcing data engineering best practices. Required Qualifications: 7+ years of experience in data engineering and data pipeline development. Strong expertise with AWS services, especially Redshift, Glue, S3, and Athena. Proven experience with Apache Iceberg or similar open table formats (like Delta Lake or Hudi). Experience with legacy tools like Siebel, Talend, and Informatica. Knowledge of data governance tools like Atlan, Collibra, or Alation. Experience implementing data quality checks using Soda or equivalent. Strong SQL and Python skills; familiarity with Spark is a plus. Solid understanding of data modeling, data warehousing, and big data architectures. Strong problem-solving skills and the ability to work in an Agile environment.
Posted 3 weeks ago
9.0 - 14.0 years
20 - 30 Lacs
Kochi, Bengaluru
Work from Office
Senior Data Engineer AWS (Glue, Data Warehousing, Optimization & Security) Experienced Senior Data Engineer (6+ Yrs) with deep expertise in AWS cloud Data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives — from ingestion to consumption — with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs to have experience in doing consulting and troubleshooting exercise to design best-fit solutions. Key Responsibilities Consult with business and technology stakeholders to understand data requirements, troubleshoot and advise on best-fit AWS data solutions Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent) Optimize data pipelines and queries for performance, cost-efficiency, and scalability Develop solutions that enable self-service analytics for business and data science teams Implement data security, governance, and access controls Collaborate with data scientists, analysts, and business stakeholders to understand data needs Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability Required Skills & Experience 8+ years of experience in data engineering in AWS platform Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, IAM Proven expertise in data modelling, data warehousing concepts, and SQL optimization Experience designing self-service data platforms for business users Solid understanding of data security, encryption, and access management Proficiency in Python Familiarity with DevOps practices & CI/CD Strong problem-solving Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement Preferred Qualifications AWS Certified Data Analytics – Specialty or Solutions Architect – Associate
Posted 3 weeks ago
3.0 - 5.0 years
12 - 14 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities Key Responsibilities: Design, develop, and maintain data pipelines and ETL workflows on AWS platform Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements Optimize data workflows for performance, scalability, and reliability Troubleshoot data issues, monitor jobs, and ensure data quality and integrity Write efficient SQL queries and automate data processing tasks Implement data security and compliance best practices Maintain technical documentation and data pipeline monitoring dashboards Required Skills: 3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda Proficient in SQL , Python, or Scala for data processing and scripting Experience with ETL tools and frameworks on AWS Understanding of data warehousing concepts and architecture Familiarity with CI/CD for data pipelines is a plus Strong problem-solving and communication skills Ability to work in Agile environment and handle multiple priorities Preferred candidate profile
Posted 3 weeks ago
3.0 - 6.0 years
13 - 19 Lacs
Hyderabad
Hybrid
Primary Responsibilities: Data Collection and Cleaning: Data Analysts are responsible for gathering data from multiple sources, ensuring its accuracy and completeness. This involves cleaning and preprocessing data to remove inaccuracies, duplicates, and irrelevant information. Proficiency in data manipulation tools such as SQL, Excel, and Python is essential for efficiently handling large data sets. Analysis and Interpretation : One of the primary tasks of a Data Analyst is to analyse data to uncover trends, patterns, and correlations. They use statistical techniques and software such as R, SAS, and Tableau to conduct detailed analyses. The ability to interpret results and communicate findings clearly is crucial for guiding business decisions. Reporting and Visualization: Data Analysts create comprehensive reports and visualizations to present data insights to stakeholders. These visualizations, often created using tools like Power BI and Tableau, make complex data more understandable and actionable. Analysts must be skilled in designing charts, graphs, and dashboards that effectively convey key information. Collaboration and Communication: Effective collaboration with other departments, such as marketing, finance, and IT, is vital for understanding data needs and ensuring that analysis aligns with organizational goals. Data Analysts must communicate their findings clearly and concisely, often translating technical data into understandable insights for non-technical stakeholders. Predictive Modelling and Forecasting: Advanced Data Analysts also engage in predictive modelling and forecasting, using machine learning algorithms and statistical methods to predict future trends and outcomes. This requires a solid understanding of data science principles and familiarity with tools like TensorFlow and Scikit-learn Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.Tech or Masters degree or equivalent degree 6+ years of experience in Data Analyst role in Data Warehouse 3+ years of experience with a focus on building models for analytics and insights in AWS environments Experience with Data Visualization: Ability to create effective visualizations using tools like Tableau, Power BI, AWS Quick Sight and other visualization software Proficiency in Analytical Tools: Solid knowledge of SQL, Excel, Python, R, and other data manipulation and statistical analysis tools Knowledge of Database Management: Understanding of database structures, schemas, and data management practices Programming Skills: Familiarity with programming languages such as Python and R for data analysis and modelling Statistical Analysis: Solid grasp of statistical methods, hypothesis testing, and experimental design Preferred Qualifications: Experience of Terraform to define and manage Infrastructure as Code(IaC) Data Engineering: Working on data architecture, database design, and data warehousing
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Interested candidates can share their updated CV at: heena.ruchwani@gspann.com Join GSPANN Technologies as a Senior AWS Data Engineer and play a critical role in designing, building, and optimizing scalable data pipelines in the cloud. Were looking for an experienced engineer who can turn complex data into actionable insights using the AWS ecosystem. Key Responsibilities: Design, develop, and maintain scalable data pipelines on AWS. Work with large datasets to perform ETL/ELT transformations using tools like AWS Glue, EMR, and Lambda . Optimize and monitor data workflows , ensuring reliability and performance. Collaborate with data analysts, architects, and other engineers to build data solutions that support business needs. Implement and manage data lakes , data warehouses , and streaming architectures . Ensure data quality, governance, and security standards are met across platforms. Participate in code reviews , documentation, and mentoring of junior data engineers. Required Skills & Qualifications: 5+ years of experience in data engineering , with strong hands-on work in the AWS cloud ecosystem . Proficiency in Python , PySpark , and SQL . Strong experience with AWS services : AWS Glue , Lambda , EMR , S3 , Athena , Redshift , Kinesis , etc. Expertise in data pipeline development and workflow orchestration (e.g., Airflow , Step Functions ). Solid understanding of data warehousing and data lake architecture. Experience with CI/CD , version control (GitHub) , and DevOps practices for data environments. Familiarity with Snowflake , Databricks , or Looker is a plus. Excellent communication and problem-solving skills. Interested candidates can share their updated CV at: heena.ruchwani@gspann.com
Posted 3 weeks ago
3.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
About the Role: Data Engineer Core Responsibilities: The candidate is expected to lead one of the key analytics areas end-to-end. This is a pure hands-on role. Ensure the solutions built meet the required best practices and coding standards. Ability to adapt to any new technology if the situation demands. Requirement gathering with the business and getting this prioritized in the sprint cycle. Should be able to take end-to-end responsibility of the assigned task Ensure quality and timely delivery. Preference and Experience- Strong at PySpark, Python, and Java fundamentals Good understanding of Data Structure Good at SQL queries/optimization Strong fundamentals of OOP programming Good understanding of AWS Cloud, Big Data. Nice to have Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB Academic qualifications- Must be a Technical Graduate B. Tech / M. Tech – Tier 1/2 colleges.
Posted 3 weeks ago
5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title:AWS Data Engineer Experience5-10 Years Location:Bangalore : Technical Skills: 5 + Years of experience as AWS Data Engineer, AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3 Execute Glue crawlers to catalog S3 files. Create catalog of S3 files for easier querying Create SQL queries in Athena Define data lifecycle management for S3 files Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio. Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3. Proficiency in setting up and managing Glue Crawlers to catalog data in S3. Deep understanding of S3 architecture and best practices for storing large datasets. Experience in partitioning and organizing data for efficient querying in S3. Knowledge of Parquet file format advantages for optimized storage and querying. Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3. Experience with Amazon Athena for writing complex SQL queries and optimizing query performance. Familiarity with creating views or transformations in Athena for business use cases. Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption. Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices. Non-Technical Skills: Candidate needs to be Good Team Player Effective interpersonal, team building and communication skills. Ability to communicate complex technology to no tech audience in simple and precise manner.
Posted 3 weeks ago
4.0 - 6.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Title:Sr AWS Data Engineer Experience4-6 Years Location:Pune, Hyderabad, Gurgaon, Bangalore [Hybrid] : PySpark, Python, SQL, AWS Services - S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch.
Posted 3 weeks ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title:EMR_Spark SME Experience:5-10 Years Location:Bangalore : Technical Skills: 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark. Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing. Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS). Solid understanding of distributed systems architecture and cluster resource management (YARN). Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena). Experience in scripting and programming languages such as Python, Scala, and Java. Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus. Architect and develop scalable data processing solutions using AWS EMR and Apache Spark. Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters. Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads. Implement best practices for cluster management, data partitioning, and job execution. Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.). Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines. Ensure data security and governance in EMR and Spark environments in compliance with company policies. Provide technical leadership and mentorship to junior engineers and data analysts. Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades. Requirements and Skills Performance tuning and optimization of Spark jobs. Problem-solving skills with the ability to diagnose and resolve complex technical issues. Strong experience with version control systems (Git) and CI/CD pipelines. Excellent communication skills to explain technical concepts to both technical and non-technical audiences. Qualification: Education qualificationB.Tech, BE, BCA, MCA, M. Tech or equivalent technical degree from a reputed college. Certifications: AWS Certified Solutions Architect – Associate/Professional AWS Certified Data Analytics – Specialty
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad
Work from Office
MEDICAL CODER / MEDICAL BILLER Job Description We are looking for a detail-oriented and proactive Eligibility Executive to manage insurance verification and benefits validation for patients in the revenue cycle process. The ideal candidate will have experience working with U.S. healthcare insurance systems, payer portals, and EHR platforms to ensure accurate eligibility checks and timely updates for claims processing. Key Responsibilities Verify patient insurance coverage and benefits through payer portals, IVR, or direct calls to insurance companies. Update and confirm insurance details in the practice management system or EHR platforms accurately and in a timely manner. Identify policy limitations, deductibles, co-pays, and co-insurance information and document clearly for billing teams. Coordinate with patients and internal teams (billing, front desk, scheduling) to clarify eligibility-related concerns. Perform eligibility checks for scheduled appointments, procedures, and recurring services. Handle real-time and batch eligibility verifications for various insurance types including commercial, Medicaid, Medicare, and TPA. Escalate discrepancies or inactive coverage to the concerned team and assist in resolving issues before claim submission. Maintain up-to-date knowledge of payer guidelines and insurance plan policies. Ensure strict adherence to HIPAA guidelines and maintain confidentiality of patient data. Meet assigned productivity and accuracy targets while following internal SOPs and compliance standards. Preferred Skills & Tools Experience with EHR/PM systems like eCW, NextGen, Athena, CMD Familiarity with major U.S. insurance carriers and payer portals Strong verbal and written communication skills Basic knowledge of medical billing and coding is a plus Ability to work in a fast-paced, detail-focused environment Qualifications ANY LIFE SCIENCE DEGREE BSc, MSc, B.Pharm, M.Pharm, BPT NOTE CPC certification preferable
Posted 3 weeks ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Minimum Qualifications: - BA/BSc/B.E./BTech degree from Tier I, II college in Computer Science, Statistics, Mathematics, Economics or related fields - 1to 4 years of experience in working with data and conducting statistical and/or numerical analysis - Strong understanding of how data can be stored and accessed in different structures - Experience with writing computer programs to solve problems - Strong understanding of data operations such as sub-setting, sorting, merging, aggregating and CRUD operations - Ability to write SQL code and familiarity with R/Python, Linux shell commands - Be willing and able to quickly learn about new businesses, database technologies and analysis techniques - Ability to tell a good story and support it with numbers and visuals - Strong oral and written communication Preferred Qualifications: - Experience working with large datasets - Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3) - Experience building analytics applications leveraging R, Python, Tableau, Looker or other - Experience in geo-spatial analysis with POSTGIS, QGIS Apply Save Save Pro Insights
Posted 3 weeks ago
4.0 - 8.0 years
10 - 20 Lacs
Gurugram
Remote
US Shift- 5 working days. Remote Work. (US Airline Group) Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Strong focus on AWS and PySpark. Knowledge of AWS services, including but not limited to S3, Redshift, Athena, EMR, and Glue. Proficiency in PySpark and related Big Data technologies for ETL processing. Strong SQL skills for data manipulation and querying. Familiarity with data warehousing concepts and dimensional modeling. Experience with data governance, data quality, and data security practices. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills to work effectively with cross-functional teams.
Posted 3 weeks ago
2 - 6 years
10 - 14 Lacs
Hyderabad, Secunderabad
Work from Office
Digital Solutions Consultant I - HYD015A Company Worley Primary Location IND-AP-Hyderabad Job Digital Solutions Schedule Full-time Employment Type Agency Contractor Job Level Experienced Job Posting May 7, 2025 Unposting Date May 20, 2025 Reporting Manager Title Manager We deliver the worlds most complex projects. Work as part of a collaborative and inclusive team . Enjoy a varied & challenging role. Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia.Right now, were bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now.We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.? The Role As a Power BI Developer with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience etc. We are seeking an experienced Power BI Developer with a strong skillset in creating visually compelling reports and dashboards, data modeling, and UI/UX design. The ideal candidate will have expertise in wireframing, UI design, and front-end development using React and CSS to complement their data analysis and visualization abilities in Power BI. Power BI Report Development: Design, develop, and maintain interactive dashboards and reports in Power BI that provide business insights. Leverage DAX, Power Query, and advanced data modeling techniques to build robust and scalable solutions. Create custom visuals and optimize Power BI performance for large datasets. UI/UX Design: Collaborate with product managers and stakeholders to define UI and UX requirements for data visualization. Design wireframes, prototypes, and interactive elements for Power BI reports and applications. Ensure designs are user-friendly, intuitive, and visually appealing. Data Modeling: Develop and maintain complex data models to support analytical and reporting needs. Ensure the integrity, accuracy, and consistency of data within Power BI reports. Implement ETL processes using Power Query for data transformation React & Front-End Development: Develop interactive front-end components and custom dashboards using React . Integrate React applications with Power BI APIs for seamless, embedded analytics experiences. Utilize CSS and modern front-end techniques to ensure responsive and visually engaging interfaces Collaboration & Problem-Solving: Work closely with cross-functional teams (data analysts, business analysts, project managers) to understand requirements and deliver solutions. Analyze business needs and translate them into effective data solutions and UI designs. Provide guidance and support in the best practices for data visualization, user experience, and data modeling. About You To be considered for this role it is envisaged you will possess the following attributes: Experience with AWS services and Power BI Service for deployment and sharing. Familiarity with other BI tools or frameworks (e.g., Tableau, Qlik, QuickSight). Basic understanding of back-end technologies and databases (e.g., SQL, NoSQL). Knowledge of Agile development methodologies. Bachelors degree in Computer Science, Information Technology, or a related field. Strong experience in Power BI (desktop and service), including Power Query, DAX, and data model design. Proficiency in UI/UX design with experience in creating wireframes, mockups, and interactive prototypes. Expertise in React for building interactive front-end applications and dashboards. Advanced knowledge of CSS for styling and creating visually responsive components. Strong understanding of data visualization best practices, including the ability to create meaningful and impactful reports. Experience working with large datasets and optimizing Power BI performance. Familiarity with Power BI APIs and embedding Power BI reports into web applications. Excellent communication and collaboration skills to work effectively in a team environment. Moving forward together We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. Were building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, theres a path for you here. And theres no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Please noteIf you are being represented by a recruitment agency you will not be considered, to be considered you will need to apply directly to Worley.
Posted 1 month ago
4 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Data Engineer | 4 to 6 years | Bengaluru Job description 4+ years of microservices development experience in two of thesePython, Java, Scala 4+ years of experience building data pipelines, CICD pipelines, and fit for purpose data stores 4+ years of experience with Big Data TechnologiesApache Spark, Hadoop, or Kafka 3+ years of experience with Relational & Non-relational DatabasesPostgres, MySQL, NoSQL (DynamoDB or MongoDB) 3+ years of experience working with data consumption patterns 3+ years of experience working with automated build and continuous integration systems 2+ years of experience in Cloud technologiesAWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena) Primary Skills: Python, Java, Scala, data pipelines, Apache Spark, Hadoop, or Kafka , Postgres, MySQL, NoSQL Secondary Skills: Snowflake , Redshift ,Relation Data Modelling, Dimensional Data Modeling Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.1. Applies scientific methods to analyse and solve software engineering problems.2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.5. The software engineer collaborates and acts as team player with other software engineers and stakeholders.
Posted 1 month ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. About The Role - Grade Specific The primary focus is to help organizations design, develop, and optimize their data infrastructure and systems. They help organizations enhance data processes, and leverage data effectively to drive business outcomes. Skills (competencies) Industry Standard Data Modeling (FSLDM) Ab Initio Industry Standard Data Modeling (IBM FSDM)) Agile (Software Development Framework) Influencing Apache Hadoop Informatica IICS AWS Airflow Inmon methodology AWS Athena JavaScript AWS Code Pipeline Jenkins AWS EFS Kimball AWS EMR Linux - Redhat AWS Redshift Negotiation AWS S3 Netezza Azure ADLS Gen2 NewSQL Azure Data Factory Oracle Exadata Azure Data Lake Storage Performance Tuning Azure Databricks Perl Azure Event Hub Platform Update Management Azure Stream Analytics Project Management Azure Sunapse PySpark Bitbucket Python Change Management R Client Centricity RDD Optimization Collaboration SantOs Continuous Integration and Continuous Delivery (CI/CD) SaS Data Architecture Patterns Scala Spark Data Format Analysis Shell Script Data Governance Snowflake Data Modeling SPARK Data Validation SPARK Code Optimization Data Vault Modeling SQL Database Schema Design Stakeholder Management Decision-Making Sun Solaris DevOps Synapse Dimensional Modeling Talend GCP Big Table Teradata GCP BigQuery Time Management GCP Cloud Storage Ubuntu GCP DataFlow Vendor Management GCP DataProc Git Google Big Tabel Google Data Proc Greenplum HQL IBM Data Stage IBM DB2
Posted 1 month ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines. Key Responsibilities: Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift. Optimize Redshift performance using distribution styles, sort keys, and compression techniques. Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt. Develop complex SQL queries, stored procedures, and materialized views for data transformations. Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB. Implement data partitioning, clustering, and query tuning strategies for optimal performance. Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.). Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI. Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies. Automate data ingestion, transformations, and warehouse maintenance tasks. Required Skills & Qualifications: 6+ years of experience in data warehousing, ETL, and data engineering. Strong hands-on experience with Amazon Redshift and AWS data services. Expertise in SQL performance tuning, indexing, and query optimization. Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend. Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena). Familiarity with data lake architectures and modern data stack. Proficiency in Python, Shell scripting, or PySpark for automation. Experience working in Agile/DevOps environments with CI/CD pipelines.
Posted 1 month ago
5 - 7 years
8 - 10 Lacs
Noida
Work from Office
What you need BS in an Engineering or Science discipline, or equivalent experience 5+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 3 years experience in a data and BI focused role Experience in data integration (ETL/ELT) development using multiple languages (e.g., Python, PySpark, SparkSQL) and data transformation (e.g., dbt) Experience building data pipelines supporting a variety of integration and information delivery methods as well as data modelling techniques and analytics Knowledge and experience with various relational databases and demonstrable proficiency in SQL and data analysis requiring complex queries, and optimization Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena, etc.) and Snowflake CDW, as well as BI tools (e.g., PowerBI) Willingness to experiment and learn new approaches and technology applications Knowledge of software engineering and agile development best practices Excellent written and verbal communication skills
Posted 1 month ago
4 - 9 years
14 - 18 Lacs
Noida
Work from Office
Who We Are Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and s ustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you. What you need * BS in an Engineering or Science discipline, or equivalent experience * 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data focused role * Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL) * Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CD C, event streaming) and data lake/warehouse in production environments * Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW * Experience of working in the larger initiatives building and rationalizing large scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus * Willingness to experiment and learn new approaches and technology applications * Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users * Knowledge of software engineering and agile development best practices * Excellent written and verbal communication skills The Brightly culture We"™re guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiousl y making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
Posted 1 month ago
2 - 5 years
3 - 7 Lacs
Gurugram
Work from Office
Role Data Engineer Skills: Data Modeling:* Design and implement efficient data models, ensuring data accuracy and optimal performance. ETL Development:* Develop, maintain, and optimize ETL processes to extract, transform, and load data from various sources into our data warehouse. SQL Expertise:* Write complex SQL queries to extract, manipulate, and analyze data as needed. Python Development:* Develop and maintain Python scripts and applications to support data processing and automation. AWS Expertise:* Leverage your deep knowledge of AWS services, such as S3, Redshift, Glue, EMR, and Athena, to build and maintain data pipelines and infrastructure. Infrastructure as Code (IaC):* Experience with tools like Terraform or CloudFormation to automate the provisioning and management of AWS resources is a plus. Big Data Processing:* Knowledge of PySpark for big data processing and analysis is desirable. Source Code Management:* Utilize Git and GitHub for version control and collaboration on data engineering projects. Performance Optimization:* Identify and implement optimizations for data processing pipelines to enhance efficiency and reduce costs. Data Quality:* Implement data quality checks and validation procedures to maintain data integrity. Collaboration:* Work closely with data scientists, analysts, and other teams to understand data requirements and deliver high-quality data solutions. Documentation:* Maintain comprehensive documentation for all data engineering processes and projects.
Posted 1 month ago
5 - 8 years
5 - 15 Lacs
Pune, Chennai
Work from Office
• SQL: 2-4 years of experience • Spark: 1-2 years of experience • NoSQL Databases: 1-2 years of experience • Database Architecture: 2-3 years of experience • Cloud Architecture: 1-2 years of experience • Experience in programming language like Python • Good Understanding of ETL (Extract, Transform, Load) concepts • Good analytical and problem-solving skills • Inclination for learning & be self-motivated. • Knowledge of ticketing tool like JIRA/SNOW. • Good communication skills to interact with Customers on issues & requirements. Good to Have: • Knowledge/Experience in Scala.
Posted 1 month ago
7 - 9 years
14 - 24 Lacs
Chennai
Work from Office
Experience Range: 4-8 years in Data Quality Engineering Job Summary: As a Senior Data Quality Engineer, you will play a key role in ensuring the reliability and accuracy of our data platform and projects. Your primary responsibility will be developing and leading the product testing strategy while leveraging your technical expertise in AWS and big data technologies. You will also guide the team in implementing shift-left testing using Behavior-Driven Development (BDD) methodologies integrated with AWS CodeBuild CI/CD. Your contributions will ensure the successful execution of testing across multiple data platforms and projects. Key Responsibilities: Develop Product Testing Strategy: Collaborate with stakeholders to define and implement the product testing strategy. Identify key platform and project responsibilities, ensuring a comprehensive and effective testing approach. Lead Testing Strategy Implementation: Take charge of implementing the testing strategy across data platforms and projects, ensuring thorough coverage and timely completion of tasks. BDD & AWS Integration: Utilize Behavior-Driven Development (BDD) methodologies to drive shift-left testing and integrate AWS services such as AWS Glue, Lambda, Airflow jobs, Athena, Quicksight, Amazon Redshift, DynamoDB, Parquet, and Spark to improve test effectiveness. Test Execution & Reporting: Design, execute, and document test cases while providing comprehensive reporting on testing results. Collaborate with the team to identify the appropriate data for testing and manage test environments. Collaboration with Developers: Work closely with application developers and technical support to analyze and resolve identified issues in a timely manner. Automation Solutions: Create and maintain automated test cases, enhancing the test automation process to improve testing efficiency. Must-Have Skills: Big Data Platform Expertise: At least 2 years of experience as a technical test lead working on a big data platform, preferably with direct experience in AWS. Strong Programming Skills: Proficiency in object-oriented programming, particularly with Python. Ability to use programming skills to enhance test automation and tooling. BDD & AWS Integration: Experience with Behavior-Driven Development (BDD) practices and AWS technologies, including AWS Glue, Lambda, Airflow, Athena, Quicksight, Amazon Redshift, DynamoDB, Parquet, and Spark. Testing Frameworks & Tools: Familiarity with testing frameworks such as PyTest, PyTest-BDD, and CI/CD tools like AWS CodeBuild and Harness. Communication Skills: Exceptional communication skills with the ability to convey complex technical concepts to both technical and non-technical stakeholders. Good-to-Have Skills: Automation Engineering: Expertise in creating automation testing solutions to improve testing efficiency. Experience with Test Management: Knowledge of test management processes, including test case design, execution, and defect tracking. Agile Methodologies: Experience working in Agile environments, with familiarity in using Agile tools such as Jira to track stories, bugs, and progress. Experience Range: Minimum Requirements: Bachelors degree in Computer Science or related field, or HS/GED with 8 years of experience in Data Quality Engineering. At least 4 years of experience in big data platforms and test engineering, with a strong focus on AWS and Python. Skills Test Automation,Python,Data Engineering
Posted 1 month ago
7 - 9 years
14 - 24 Lacs
Chennai
Work from Office
Job Summary: As a Senior Data Quality Engineer, you will play a crucial role in ensuring the reliability and accuracy of our data platform and projects. Your primary responsibilities will involve developing and leading the product testing strategy, leveraging your technical expertise in AWS and big data technologies. You will also work closely with the team to implement shift-left testing using Behavior-Driven Development (BDD) methodologies integrated with AWS CodeBuild CI/CD. Key Responsibilities: Develop Product Testing Strategy: Collaborate with stakeholders to define and design the product testing strategy, identifying key platform and project responsibilities. Lead Testing Strategy Implementation: Take charge of implementing the testing strategy, ensuring its successful execution across the data platform and projects. Oversee and coordinate testing tasks to ensure thorough coverage and timely completion. BDD and AWS Integration: Guide the team in utilizing Behavior-Driven Development (BDD) practices for shift-left testing. Leverage AWS services (e.g., AWS Glue, Lambda, Airflow, Athena, Quicksight, Redshift, DynamoDB, Parquet, Spark) to enhance testing effectiveness. Test Case Management: Work with the team to identify and prepare data for testing, create/maintain automated test cases, execute test cases, and document results. Problem Resolution: Assist developers and technical support staff in resolving identified issues in a timely manner. Automation Engineering Solutions: Create test automation solutions that improve the efficiency and coverage of testing efforts. Must-Have Skills: Big Data Platform Expertise: At least 2 years of experience as a technical test lead working on a big data platform, preferably with direct experience in AWS. AI/ML Familiarity: Experience with AI/ML concepts and practical experience working on AI/ML-driven initiatives. Synthetic Test Data Creation: Knowledge of synthetic data tooling, test data generation, and best practices. Offshore Team Leadership: Proven ability to lead and collaborate with offshore teams, managing projects with limited real data access. Programming Expertise: Strong proficiency in object-oriented programming, particularly with Python. Testing Tools/Frameworks: Familiarity with tools like PyTest, PyTest-BDD, AWS CodeBuild, and Harness. Excellent Communication: Ability to communicate effectively with both technical and non-technical stakeholders, explaining complex technical concepts in simple terms. Good-to-Have Skills: Experience with AWS Services: Familiarity with AWS DL/DW components like AWS Glue, Lambda, Airflow jobs, Athena, Quicksight, Amazon Redshift, DynamoDB, Parquet, and Spark. Test Automation Experience: Practical experience in implementing test automation frameworks for complex data platforms and systems. Shift-Left Testing Knowledge: Experience in implementing shift-left testing strategies, particularly using Behavior-Driven Development (BDD) methodologies. Project Management: Ability to manage multiple testing projects simultaneously while ensuring the accuracy and quality of deliverables. Minimum Requirements: Bachelors in Computer Science and 4 years of relevant experience, or High School/GED with 8 years of relevant experience. Relevant Experience: Big Data platform testing, test strategy leadership, automation, and working with AWS services and AI/ML concepts. Skills Test Automation,Python,Data Engineering
Posted 1 month ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
India's job market for athena professionals is thriving, with numerous opportunities available for individuals skilled in this area. From entry-level positions to senior roles, companies across various industries are actively seeking talent with expertise in athena to drive their businesses forward.
The average salary range for athena professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 4-7 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-20 lakhs per annum.
In the field of athena, a typical career progression may include roles such as Junior Developer, Developer, Senior Developer, Tech Lead, and eventually reaching positions like Architect or Manager. Continuous learning and upskilling are essential to advance in this field.
Apart from proficiency in athena, professionals in this field are often expected to have skills such as SQL, data analysis, data visualization, AWS, and Python. Strong problem-solving abilities and attention to detail are also highly valued in athena roles.
As you explore opportunities in the athena job market in India, remember to showcase your expertise, skills, and enthusiasm for the field during interviews. With the right preparation and confidence, you can land your dream job in this dynamic and rewarding industry. Good luck!
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle,WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris,France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore,Karnataka
Accenture in India
4290 Jobs | Dublin 2