4.0 - 7.0 years
25 - 27 Lacs
Bengaluru
Remote
4+ years of experience as a Data Engineer/Scientist, with hands-on experience in data warehousing, data ingestion, data processing, and data lakes. Must have strong development experience with Python and SQL, plus an understanding of data orchestration tools like Airflow. Required candidate profile: experience with data extraction techniques (CDC, batch-based, Debezium, Kafka Connect, AWS DMS); queuing/messaging systems (SQS, RabbitMQ, Kinesis); and AWS data/ML services (AWS Glue, MWAA, Athena, Redshift).
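For a flavor of the orchestration work this posting describes, here is a minimal Airflow DAG sketch for a daily batch-ingestion task (assuming Airflow 2.4+; the DAG name, task, and ingestion step are hypothetical):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders_batch(ds, **kwargs):
    # Hypothetical ingestion step: pull one day's records from a source
    # system and land them in the data lake; `ds` is the run's logical date.
    print(f"Ingesting orders for {ds}")


with DAG(
    dag_id="daily_orders_ingestion",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(
        task_id="ingest_orders_batch",
        python_callable=ingest_orders_batch,
    )
```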
Posted 1 month ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Dear Candidates, we are conducting a walk-in interview in Hyderabad for the position of Data Engineering on 20th/21st/22nd June 2025.
Position: Data Engineering
Job description: Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift). Handling of data-related activities such as data parsing, cleansing, quality definition, data pipelines, storage, and ETL scripts. Experience in the programming languages Python/PySpark/SQL. Hands-on data migration experience. Experience consuming REST APIs with various authentication options within an AWS Lambda architecture; able to orchestrate triggers, debug, and schedule batch jobs using AWS Glue, Lambda, and Step Functions. Understanding of AWS security features such as IAM roles and policies. Exposure to DevOps tools. AWS certification is highly preferred.
Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift.
Date: 20th June 2025 to 22nd June 2025. Time: 9.00 AM to 6.00 PM. Eligibility: Any graduate. Experience: 2-10 years. Gender: Any.
Interested candidates can walk in directly. For any queries, please contact us at +91 7349369478 / 8555079906.
Interview Venue Details: Selectify Analytics, Capital Park (Jain Sadguru Capital Park), Ayyappa Society, Silicon Valley, Madhapur, Hyderabad, Telangana 500081. Contact Person: Mr. Deepak/Saqeeb/Ravi Kumar. Interview Time: 9.00 AM to 6.00 PM. Contact Number: +91 7349369478 / 8555079906.
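The "orchestrate triggers, debug and schedule batch jobs using AWS Glue, Lambda, and Step Functions" requirement typically looks something like this Lambda handler, which starts a Glue job run via boto3 (the job name and argument are hypothetical):

```python
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    # Start a (hypothetical) Glue batch job in response to a trigger event,
    # e.g. an S3 object-created notification or a scheduled rule.
    run = glue.start_job_run(
        JobName="nightly-orders-etl",                         # hypothetical job
        Arguments={"--run_date": event.get("run_date", "")},  # optional override
    )
    return {"JobRunId": run["JobRunId"]}
```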
Posted 1 month ago
3.0 - 8.0 years
6 - 18 Lacs
Hyderabad
Work from Office
Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift, SQL. Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift).
Posted 1 month ago
7.0 - 12.0 years
6 - 16 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Role & responsibilities
Total experience: 7 yrs. Relevant experience: 6 yrs.
Mandatory skills: Experience in big data technologies such as Hadoop, Apache Spark, and Hive; AWS cloud services including S3, Redshift, EMR, etc.; strong expertise in RDBMS and SQL.
Nice to have: Good experience in Linux and shell scripting. Experience building data pipelines using Apache Airflow / Control-M. Practical experience in Core Java/Python/Scala. Experience with SDLC tools (e.g. Bamboo, JIRA, Git, Confluence, Bitbucket). Experience in data modelling, data quality, and load assurance. Ability to communicate problems and solutions effectively to both business and technical stakeholders (written and verbal).
Added advantages: Scheduling tools (Control-M, Airflow). Monitoring tools (Sumo Logic, Splunk). Familiarity with the Incident Management process.
Posted 1 month ago
8.0 - 13.0 years
35 - 45 Lacs
Kochi
Work from Office
Experience in Python programming for data tasks. Experience in AWS data technologies, including AWS Data Lakehouse, S3, Redshift, Athena, etc., with exposure to the equivalent Azure technologies. Experience working with applications built on a microservices architecture. Required candidate profile: extensive experience with ETL and data engineering tools and processes, particularly within the AWS ecosystem, including AWS Glue, Athena, AWS Data Pipeline, AWS Lambda, etc.
Posted 1 month ago
3.0 - 6.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Experience with data visualization using QuickSight or similar tools. Experience with one or more industry analytics/visualization tools (e.g. Excel, QuickSight, Power BI) and statistical methods (e.g. t-test, Chi-squared). Experience with a scripting language such as Python. 3+ years of experience analyzing and interpreting data with Redshift, NoSQL, etc.
Posted 1 month ago
3.0 - 6.0 years
15 - 20 Lacs
Bengaluru
Remote
Key Responsibilities:
Develop, design, and maintain dashboards and reports using Tableau and Power BI to support business decision-making.
Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
Work with AWS Redshift and Databricks for data extraction, transformation, and loading (ETL) processes.
Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
Apply knowledge of US healthcare systems to help build relevant data solutions and insights.
Required Skills & Qualifications:
Minimum 3 years of experience in data analysis, business intelligence, or related roles.
Strong expertise in SQL for data querying and manipulation.
Extensive experience creating dashboards and reports using Tableau and Power BI.
Hands-on experience working with AWS Redshift and Databricks.
Proven problem-solving skills with a focus on providing actionable data solutions.
Self-motivated and able to work independently, while being a proactive team player.
Experience or strong understanding of US healthcare systems and data-related needs.
Excellent communication skills with the ability to work across different teams and stakeholders.
Desired Skills (Nice to Have):
Familiarity with other BI tools or cloud platforms.
Experience in healthcare data analysis or healthcare analytics.
Posted 1 month ago
7.0 - 12.0 years
15 - 25 Lacs
Mumbai, Pune, Mumbai (All Areas)
Work from Office
Seeking a Data Architect to design scalable data models, build pipelines, ensure governance and security, and optimize performance across cloud/data platforms. Collaborate with teams, drive innovation, lead data strategy, and mentor others.
Posted 1 month ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad
Hybrid
Business Data Analyst - Healthcare
Position Description / Job Summary: We are seeking an experienced and results-driven Business Data Analyst with 5+ years of hands-on experience in data analytics, visualization, and business insight generation. This role is ideal for someone who thrives at the intersection of business and data, translating complex data sets into compelling insights, dashboards, and strategies that support decision-making across the organization. You will collaborate closely with stakeholders across departments to identify business needs, design and build analytical solutions, and tell compelling data stories using advanced visualization tools.
Key Responsibilities:
• Data Analytics & Insights: Analyze large and complex data sets to identify trends, anomalies, and opportunities that help drive business strategy and operational efficiency.
• Dashboard Development & Data Visualization: Design, develop, and maintain interactive dashboards and visual reports using tools like Power BI, Tableau, or Looker to enable data-driven decisions.
• Business Stakeholder Engagement: Collaborate with cross-functional teams to understand business goals, define metrics, and convert ambiguous requirements into concrete analytical deliverables.
• KPI Definition & Performance Monitoring: Define, track, and report key performance indicators (KPIs), ensuring alignment with business objectives and consistent measurement across teams.
• Data Modeling & Reporting Automation: Work with data engineering and BI teams to create scalable, reusable data models and automate recurring reports and analysis processes.
• Storytelling with Data: Communicate findings through clear narratives supported by data visualizations and actionable recommendations to both technical and non-technical audiences.
• Data Quality & Governance: Ensure accuracy, consistency, and integrity of data through validation, testing, and documentation practices.
Required Qualifications:
• Bachelor's or Master's degree in Business, Economics, Statistics, Computer Science, Information Systems, or a related field.
• 5+ years of professional experience in a data analyst or business analyst role with a focus on data visualization and analytics.
• Proficiency in data visualization tools: Power BI, Tableau, Looker (at least one).
• Strong experience in SQL and working with relational databases to extract, manipulate, and analyze data.
• Deep understanding of business processes, KPIs, and analytical methods.
• Excellent problem-solving skills with attention to detail and accuracy.
• Strong communication and stakeholder management skills with the ability to explain technical concepts in a clear and business-friendly manner.
• Experience working in Agile or fast-paced environments.
Preferred Qualifications:
• Experience working with cloud data platforms (e.g., Snowflake, BigQuery, Redshift).
• Exposure to Python or R for data manipulation and statistical analysis.
• Knowledge of data warehousing, dimensional modeling, or ELT/ETL processes.
• Domain experience in Healthcare is a plus.
Posted 1 month ago
5.0 - 8.0 years
10 - 17 Lacs
Pune
Remote
We're Hiring! | Senior Data Engineer (Remote)
Location: Remote | Shift: US - CST Time | Department: Data Engineering
Are you a data powerhouse who thrives on solving complex data challenges? Do you love working with Python, AWS, and cutting-edge data tools? If yes, Atidiv wants YOU! We're looking for a Senior Data Engineer to build and scale data pipelines, transform how we manage data lakes and warehouses, and power real-time data experiences across our products.
What You'll Do: Architect and develop robust, scalable data pipelines using Python & PySpark. Drive real-time & batch data ingestion from diverse data sources. Build and manage data lakes and data warehouses using AWS (S3, Glue, Redshift, EMR, Lambda, Kinesis). Write high-performance SQL queries and optimize ETL/ELT jobs. Collaborate with data scientists, analysts, and engineers to ensure high data quality and availability. Implement monitoring, logging & alerting for workflows. Ensure top-tier data security, compliance & governance.
What We're Looking For: 5+ years of hands-on experience in Data Engineering. Strong skills in Python, DBT, SQL, and working with Snowflake. Proven experience with Airflow, Kafka/Kinesis, and the AWS ecosystem. Deep understanding of CI/CD practices. Passion for clean code, automation, and scalable systems.
Why Join Atidiv? 100% Remote | Flexible Work Culture. Opportunity to work with cutting-edge technologies. Collaborative, supportive team that values innovation and ownership. Work on high-impact, global projects.
Ready to transform data into impact? Send your resume to: nitish.pati@atidiv.com
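As a sketch of the Python & PySpark pipeline work described above, a small batch transformation (bucket paths and columns are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw JSON events from a landing zone (hypothetical bucket/path).
raw = spark.read.json("s3://example-landing/orders/2025-06-01/")

# Typical cleanup: drop duplicates, normalize types, derive a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("created_at"))
)

# Write partitioned Parquet to the curated zone for downstream querying.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders/"
)
```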
Posted 1 month ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Bengaluru
Work from Office
**URGENT hiring** Note: This is a work-from-office opportunity. Apply only if you are okay with it.
Must-have skills: MySQL, PostgreSQL, NoSQL, and Redshift
Location: Bangalore/Hyderabad
Years of experience: 5+ years
Notice period: Immediate to 15 days
Role Overview: The Database Engineer (DBE) is responsible for the design, implementation, maintenance, and optimization of databases to ensure high availability, security, and performance. The role involves working with relational and NoSQL databases, managing backups, monitoring performance, and ensuring data integrity.
Key Responsibilities:
Database Administration & Maintenance: Install, configure, and maintain database management systems (DBMS) such as MySQL, PostgreSQL, SQL Server, Oracle, or MongoDB. Ensure database security, backup, and disaster recovery strategies are in place. Monitor database performance and optimize queries, indexing, and storage. Apply patches, updates, and upgrades to ensure system stability and security.
Database Design & Development: Design and implement database schemas, tables, and relationships based on business requirements. Develop and optimize stored procedures, functions, and triggers. Implement data partitioning, replication, and sharding strategies for scalability.
Performance Tuning & Optimization: Analyze slow queries and optimize database performance using indexing, caching, and tuning techniques (a short sketch follows below). Conduct database capacity planning and resource allocation. Monitor and troubleshoot database-related issues, ensuring minimal downtime.
Security & Compliance: Implement role-based access control (RBAC) and manage user permissions. Ensure databases comply with security policies, including encryption, auditing, and GDPR/HIPAA regulations. Conduct regular security assessments and vulnerability scans.
Collaboration & Automation: Work closely with developers, system administrators, and DevOps teams to integrate databases with applications. Automate database management tasks using scripts and tools. Document database configurations, processes, and best practices.
Required Skills & Qualifications:
Experience: 4+ years of experience in database administration, engineering, or related fields.
Education: Bachelor's or Master's degree in Computer Science, Information Technology, or related disciplines.
Technical Skills: Strong knowledge of SQL and database optimization techniques. Hands-on experience with at least one major RDBMS (MySQL, PostgreSQL, SQL Server, Oracle). Experience with NoSQL databases (MongoDB, Cassandra, DynamoDB) is a plus. Proficiency in database backup, recovery, and high-availability solutions (replication, clustering, mirroring). Familiarity with scripting languages (Python, Bash, PowerShell) for automation. Experience with cloud-based database solutions (AWS RDS, Azure SQL, Google Cloud Spanner).
Preferred Qualifications: Experience with database migration and cloud transformation projects. Knowledge of CI/CD pipelines and DevOps methodologies for database management. Familiarity with big data technologies like Hadoop, Spark, or Elasticsearch.
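For the slow-query work mentioned under Performance Tuning & Optimization, a brief Python sketch using psycopg2 against PostgreSQL (connection details, table, and column are hypothetical):

```python
import psycopg2

# Hypothetical connection; in practice credentials come from a secrets store.
conn = psycopg2.connect(host="localhost", dbname="appdb", user="dba", password="...")
conn.autocommit = True

with conn.cursor() as cur:
    # Inspect the execution plan of a slow query before tuning it.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)

    # Index the filtered column, a common first optimization.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
```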
Posted 1 month ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Remote
Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-6 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of AWS Redshift and Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of US healthcare is a plus.
Key Responsibilities:
Develop, design, and maintain dashboards and reports using Tableau to support business decision-making.
Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
Work with AWS Redshift and Databricks for data extraction, transformation, and loading (ETL) processes.
Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
Apply knowledge of US healthcare systems to help build relevant data solutions and insights.
Required Skills & Qualifications:
Minimum 3 years of experience in data analysis, business intelligence, or related roles.
Strong expertise in SQL for data querying and manipulation.
Extensive experience creating dashboards and reports using Tableau.
Hands-on experience working with AWS Redshift and Databricks.
Proven problem-solving skills with a focus on providing actionable data solutions.
Self-motivated and able to work independently, while being a proactive team player.
Experience or strong understanding of US healthcare systems and data-related needs.
Excellent communication skills with the ability to work across different teams and stakeholders.
Desired Skills (Nice to Have):
Familiarity with other BI tools or cloud platforms.
Experience in healthcare data analysis or healthcare analytics.
Posted 1 month ago
5.0 - 8.0 years
18 - 30 Lacs
Hyderabad
Work from Office
AWS Data Engineer with Glue, Terraform, and Business Intelligence (Tableau) development.
* Design, develop & maintain AWS data pipelines using Glue, Lambda & Redshift
* Collaborate with the BI team on ETL processes & dashboard creation with Tableau
Posted 1 month ago
4.0 - 7.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Must-Have Qualifications:
AWS Expertise: Strong hands-on experience with AWS data services including Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch.
ETL/ELT Engineering: Deep proficiency in designing robust ETL/ELT pipelines with AWS Glue (PySpark/Scala), Python, dbt, or other automation frameworks.
Data Modeling: Advanced knowledge of dimensional (Star/Snowflake) and normalised data modeling, optimised for Redshift and S3-based lakehouses.
Programming Skills: Proficient in Python, SQL, and PySpark, with automation and scripting skills for data workflows.
Architecture Leadership: Demonstrated experience leading large-scale AWS data engineering projects across teams and domains.
Pre-sales & Consulting: Proven experience working with clients, responding to technical RFPs, and designing cloud-native data solutions.
Advanced PySpark Expertise: Deep hands-on experience writing optimized PySpark code for distributed data processing, including transformation pipelines using DataFrames, RDDs, and Spark SQL, with a strong grasp of lazy evaluation, the Catalyst optimizer, and the Tungsten execution engine.
Performance Tuning & Partitioning: Proven ability to debug and optimize Spark jobs through custom partitioning strategies, broadcast joins, caching, and checkpointing, with proficiency in tuning executor memory and shuffle configurations and leveraging the Spark UI for performance diagnostics in large-scale (>TB) data workloads.
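To make the tuning vocabulary concrete (broadcast joins, caching, repartitioning), a short PySpark sketch with hypothetical table paths:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

orders = spark.read.parquet("s3://example-curated/orders/")    # large fact table
regions = spark.read.parquet("s3://example-curated/regions/")  # small dimension

# Broadcasting the small side avoids shuffling the large table across executors.
joined = orders.join(broadcast(regions), on="region_id", how="left")

# Cache only when the result is reused by several downstream actions.
joined.cache()
print(joined.count())

# Repartition on a well-distributed key before a wide aggregation to keep
# shuffle partitions balanced.
daily = joined.repartition("order_date").groupBy("order_date").count()
daily.show()
```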
Posted 1 month ago
6.0 - 10.0 years
7 - 14 Lacs
Bengaluru
Hybrid
Roles and Responsibilities
Architect and incorporate an effective data framework enabling an end-to-end data solution.
Understand business needs, use cases, and drivers for insights, and translate them into detailed technical specifications.
Create epics, features, and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
Create scalable and robust data solution designs that incorporate governance, security, and compliance aspects.
Develop and maintain logical and physical data models, and work closely with data engineers, data analysts, and data testers for their successful implementation.
Analyze, assess, and design data integration strategies across various sources and platforms.
Create project plans and timelines while monitoring and mitigating risks and controlling the progress of the project.
Conduct daily scrums with the team, with a clear focus on meeting sprint goals and timely resolution of impediments.
Act as a liaison between technical teams and business stakeholders.
Guide and mentor the team on best practices for data solutions and delivery frameworks.
Actively work with, facilitate, and support stakeholders/clients to complete User Acceptance Testing, and ensure there is strong adoption of the data products after launch.
Define and measure KPIs/KRAs for features and ensure the data roadmap is verified through measurable outcomes.
Prerequisites
5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years working in a Data Architect role.
Proven hands-on experience in building pipelines for data lakes, data lakehouses, data warehouses, and data visualization solutions.
Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh, and Data Fabric.
Experience managing the data life cycle in a fast-paced, Agile/Scrum environment.
Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas in a clear, concise fashion to technical and non-technical audiences.
Ability to collaborate and work effectively with cross-functional teams, project stakeholders, and end users for quality deliverables within stipulated timelines.
Ability to manage, coach, and mentor a team of data engineers, data testers, and data analysts.
Strong process driver with expertise in the Agile/Scrum framework on tools like Azure DevOps, Jira, or Confluence.
Exposure to Machine Learning, Gen AI, and modern AI-based solutions.
Experience
Technical Lead - Data Analytics with 6+ years of overall experience, of which 2+ years is in data architecture.
Education
Engineering degree from a Tier 1 institute preferred.
Compensation
The compensation structure will be as per industry standards.
Posted 1 month ago
5.0 - 8.0 years
1 - 1 Lacs
Hyderabad
Hybrid
Location: Hyderabad (Hybrid)
Key Responsibilities:
1. Data Engineering (AWS Glue & AWS Services): Design, develop, and optimize ETL pipelines using AWS Glue (PySpark). Manage and transform structured and unstructured data from multiple sources into AWS S3, Redshift, or Snowflake. Work with AWS Lambda, S3, Athena, and Redshift for data orchestration. Implement data lake and data warehouse solutions in AWS.
2. Infrastructure as Code (Terraform & AWS Services): Design and deploy AWS infrastructure using Terraform. Automate resource provisioning and manage Infrastructure as Code. Monitor and optimize cloud costs, security, and compliance. Maintain and improve CI/CD pipelines for deploying data applications.
3. Business Intelligence (Tableau Development & Administration): Develop interactive dashboards and reports using Tableau. Connect Tableau with AWS data sources such as Redshift, Athena, and Snowflake. Optimize SQL queries and extracts for performance efficiency. Manage Tableau Server administration, including security, access controls, and performance tuning.
Required Skills & Experience:
5+ years of experience in AWS data engineering with Glue, Redshift, and S3.
Strong expertise in ETL development using AWS Glue (PySpark, Scala, or Python).
Experience with Terraform for AWS infrastructure automation.
Proficiency in SQL, Python, or Scala for data processing.
Hands-on experience in Tableau development & administration.
Strong understanding of cloud security, IAM roles, and permissions.
Experience with CI/CD pipelines (Git, Jenkins, AWS CodePipeline, etc.).
Knowledge of data modeling, warehousing, and performance optimization.
Please share your resume to: +91 9361912009
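A minimal AWS Glue (PySpark) job skeleton of the kind this role would build; the job parameters and S3 paths are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Resolve parameters passed by the Glue trigger (names are hypothetical).
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV, apply a simple cleanup, write Parquet to the lake.
df = spark.read.option("header", "true").csv(args["source_path"])
df = df.dropna(subset=["id"])
df.write.mode("append").parquet(args["target_path"])

job.commit()
```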
Posted 1 month ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role.
Key Responsibilities:
Design and implement end-to-end data platforms leveraging AWS services.
Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness.
Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API.
Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines.
Write efficient Python code and data frame transformations, along with unit testing.
Manage orchestration tools such as AWS Step Functions and Airflow.
Perform Redshift performance tuning to ensure optimal query execution.
Collaborate with stakeholders to understand requirements and communicate technical solutions clearly.
Required Skills & Qualifications:
Minimum 9 years of IT experience with proven AWS expertise.
Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda.
Mandatory experience working with AWS Redshift, including stored procedures and performance tuning.
Experience building end-to-end data platforms on AWS.
Proficiency in Python, especially working with data frames and writing testable, production-grade code.
Familiarity with orchestration tools like Airflow or AWS Step Functions.
Excellent problem-solving skills and a collaborative mindset.
Strong verbal and written communication and stakeholder management abilities.
Nice to Have:
Experience with CI/CD for data pipelines.
Knowledge of AWS Lake Formation and data governance practices.
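Since the role calls out the Redshift Data API, a brief boto3 sketch of submitting a query and polling for the result (cluster, database, and SQL are hypothetical):

```python
import time

import boto3

client = boto3.client("redshift-data")

# Submit a query asynchronously via the Redshift Data API.
submitted = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT order_date, COUNT(*) FROM orders GROUP BY order_date",
)

# Poll until the statement finishes, then fetch the rows.
while True:
    desc = client.describe_statement(Id=submitted["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    rows = client.get_statement_result(Id=submitted["Id"])["Records"]
    print(rows[:5])
```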
Posted 1 month ago
5.0 - 8.0 years
15 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance.
Data Engineering Concepts: Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data.
Programming and Scripting: Proficiency in Python, PySpark, and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions (a short sketch follows below)
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
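Responsibility 6 (serverless workflows with AWS Step Functions), seen from the Python side, is often as simple as starting an execution of a state machine that chains the Glue/Lambda steps; the ARN and input below are hypothetical:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Kick off a (hypothetical) state machine that chains Glue ETL steps.
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-flow",
    input=json.dumps({"run_date": "2025-06-01"}),
)
print(response["executionArn"])
```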
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram, Bengaluru
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
Posted 1 month ago
2.0 - 7.0 years
12 - 18 Lacs
Vadodara
Work from Office
- Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform; design, implement, and maintain the data architecture for all AWS data services
- A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
- Work with stakeholders to identify business needs and requirements for data-related projects
- Strong SQL and/or Python or PySpark knowledge
- Create data models that can be used to extract information from various sources and store it in a usable format
- Optimize data models for performance and efficiency
- Write SQL queries to support data analysis and reporting
- Monitor and troubleshoot data pipelines
- Collaborate with software engineers to design and implement data-driven features
- Perform root cause analysis on data issues
- Maintain documentation of the data architecture and ETL processes
- Identify opportunities to improve performance by improving database structure or indexing methods
- Maintain existing applications by updating existing code or adding new features to meet new requirements
- Design and implement security measures to protect data from unauthorized access or misuse
- Recommend infrastructure changes to improve capacity or performance
- Experience in the process industry
Posted 1 month ago
8.0 - 13.0 years
20 - 32 Lacs
Bengaluru
Hybrid
Job Title: Senior Data Engineer
Experience: 9+ Years
Location: Whitefield, Bangalore
Notice Period: Serving notice or immediate joiners
Role & Responsibilities:
Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools.
Develop robust data models to support analytical and reporting requirements.
Automate data engineering processes using appropriate scripting languages and frameworks.
Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions.
Serve as a liaison between engineering and business teams on all data-related initiatives.
Automate monitoring and alerting for data pipelines, products, and dashboards; provide support for issue resolution, including on-call responsibilities.
Write optimized and modular SQL queries, including view and table creation as required.
Define and implement best practices for data validation, ensuring alignment with enterprise standards.
Manage QA data environments, including test data creation and maintenance.
Qualifications:
9+ years of experience in data engineering or a related field.
Proven experience with Agile software development practices.
Strong SQL skills and experience working with both RDBMS and NoSQL databases.
Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift.
Proficiency with cloud technologies, preferably AWS.
Deep knowledge of data modeling, data warehousing, and data lake concepts.
Practical experience with ETL/ELT tools and frameworks.
5+ years of experience in application development using Python, SQL, Scala, or Java.
Experience working with real-time data streaming and associated platforms.
Note: The professional should be based in Bangalore, as one technical round has to be taken face-to-face at the Bellandur, Bangalore office.
Posted 1 month ago
6.0 - 10.0 years
10 - 15 Lacs
Noida, Kolkata, Bengaluru
Hybrid
We are hiring IT professionals; for more details, please refer to the job descriptions below.
Job Type: Contractual
Location: PAN India. Candidates from any city in India are welcome to apply.
Client: MNC
Position 1: Oracle DBA with AWS & Redshift Experience
Experience Required: 6+ Years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Job Summary: We are seeking a highly skilled and experienced Oracle Database Administrator (DBA) with strong expertise in AWS cloud services and Amazon Redshift. The ideal candidate will be responsible for managing, monitoring, and optimizing database environments to ensure high availability, performance, and security in a hybrid cloud setup.
Key Responsibilities:
Administer and maintain Oracle databases (versions 12c/19c and above)
Design, implement, and manage cloud-based database solutions using AWS (RDS, EC2, S3, IAM, etc.)
Manage and optimize Amazon Redshift data warehouse solutions
Perform performance tuning, backup & recovery, patching, and upgrade activities
Implement and maintain database security, integrity, and high-availability solutions
Handle database migrations from on-premise to cloud (AWS)
Automate database processes using scripting (Shell, Python, etc.)
Collaborate with development, DevOps, and infrastructure teams
Monitor system health and performance, and proactively address issues
Required Skills:
6+ years of hands-on experience as an Oracle DBA
Strong experience with AWS services related to database hosting and management
Expertise in Amazon Redshift architecture, performance tuning, and data loading strategies
Proficiency in SQL, PL/SQL, and scripting languages
Solid understanding of database backup strategies and disaster recovery
Experience with tools like Oracle Enterprise Manager, CloudWatch, and other monitoring tools
Excellent analytical and troubleshooting skills
Preferred Qualifications:
AWS certification (e.g., AWS Certified Database - Specialty, Solutions Architect)
Experience with data migration tools and methodologies
Familiarity with PostgreSQL or other relational databases is a plus
Position 2: Flexera Architect
Experience: 5+ Years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Key Responsibilities:
Architect and implement Flexera solutions including Flexera One, FlexNet Manager Suite, and AdminStudio.
Collaborate with IT, procurement, and business teams to understand software asset life cycle requirements and translate them into Flexera-based solutions.
Optimize software usage and licensing costs through in-depth license position analysis, usage tracking, and compliance reporting.
Define policies and workflows for Software Asset Management (SAM) and drive adoption across the organisation.
Develop and maintain integrations with CMDB, SCCM, ServiceNow, and other ITSM/ITOM tools.
Design and deliver dashboards, reports, and KPIs using Flexera's analytics tools.
Ensure compliance with software vendor audits and licensing requirements.
Provide subject matter expertise during audits, renewals, and true-ups for vendors such as Microsoft, Oracle, IBM, Adobe, etc.
Train internal stakeholders and support teams on Flexera tools and SAM practices.
Troubleshoot and resolve issues related to Flexera configurations, agents, and data collection.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum 5 years of hands-on experience with Flexera products (Flexera One / FlexNet Manager Suite).
Strong understanding of software licensing models (perpetual, subscription, cloud, user/device-based).
Experience with ITAM/SAM processes and best practices.
Proficient in software discovery, normalization, and reconciliation.
Familiarity with integration of Flexera with tools like SCCM, ServiceNow, Tanium, or JAMF.
Strong analytical, problem-solving, and communication skills.
Experience with scripting or automation (PowerShell, SQL) is a plus.
Position 3: Senior Micro Focus Specialist
Experience Required: 15+ Years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Key Responsibilities:
Lead end-to-end implementation and optimization of Micro Focus solutions such as ALM/QC, UFT, LoadRunner, Service Manager, Operations Orchestration, and SMAX.
Analyze enterprise needs and recommend appropriate Micro Focus tools to address system, service management, performance testing, and automation requirements.
Architect and integrate Micro Focus platforms into enterprise ecosystems, ensuring seamless interoperability with other ITSM, DevOps, and monitoring tools.
Provide hands-on support, upgrades, patching, and performance tuning of existing Micro Focus environments.
Create technical documentation, SOPs, and system architecture diagrams.
Mentor junior team members and provide leadership in troubleshooting complex system issues.
Collaborate with stakeholders to define KPIs, implement monitoring solutions, and ensure SLAs are met.
Ensure security and compliance of all Micro Focus solutions with enterprise policies.
Act as a Subject Matter Expert (SME) for RFPs, audits, solution proposals, and enterprise digital transformation initiatives.
Key Requirements:
15+ years of total IT experience with at least 10 years of hands-on experience in Micro Focus tools.
Expertise in one or more of the following: Micro Focus ALM/QC, UFT, LoadRunner, SMAX, Service Manager, Operations Bridge, or Data Protector.
Experience with scripting languages such as VBScript, JavaScript, or PowerShell.
Strong understanding of ITIL processes, service delivery, and ITSM best practices.
Prior experience implementing Micro Focus in hybrid cloud or enterprise environments.
Ability to lead teams and manage cross-functional collaboration.
Contract Details:
Type: Contract (long-term / project-based)
Location: Open to candidates across India (PAN India)
Mode: Hybrid (combination of remote and on-site, based on project needs)
Interested candidates can send their resume to sujoy@prohrstrategies.com
Job Type: Contractual. Contract length: 12 months.
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Kolkata, Hyderabad, Pune
Work from Office
• Minimum 5+ years of working experience as a Databricks Developer
• Minimum 3+ years of working experience with Redshift, Python, PySpark, and AWS
• The associate should hold a Databricks certification and be willing to join within 30 days
Posted 1 month ago
7.0 - 12.0 years
18 - 25 Lacs
Bengaluru
Work from Office
JOB DESCRIPTION
Role Expectations:
Design, develop, and maintain robust, scalable, and efficient data pipelines.
Monitor data workflows and systems to ensure reliability and performance.
Identify and troubleshoot issues related to data flow and database performance.
Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
Continuously optimize existing data processes and architectures.
Qualifications:
Programming Languages: Proficient in Python and SQL.
Databases: Strong experience with Amazon Redshift, Aurora, and MySQL.
Data Engineering: Solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines.
Strong problem-solving and analytical skills.
Excellent communication and teamwork abilities.
Posted 1 month ago