5 - 8 years
22 - 30 Lacs
Pune, Chennai
Work from Office
Experience:
- Minimum of 5 years of experience in data engineering, with a strong focus on data pipeline development.
- At least 2 years of experience leading teams or projects in the healthcare, life sciences, or related domains.
- Proficiency in Python, with experience in data manipulation libraries.
- Hands-on experience with AWS Glue, AWS Lambda, S3, Redshift, and other relevant AWS data services (see the sketch after this list).
- Familiarity with data integration tools, ETL (Extract, Transform, Load) frameworks, and data warehousing solutions.
- Proven experience working in an onsite-offshore model, managing distributed teams, and coordinating development across multiple time zones.
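For candidates gauging the expected hands-on level, below is a minimal sketch of the kind of AWS Glue job this posting describes: read a table from the Glue Data Catalog, filter it with Spark, and write partitioned Parquet back to S3. The database, table, column, and bucket names are hypothetical; a production job would add bookmarks, error handling, and data-quality checks.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: the job name arrives as a job argument.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: a catalog table registered by a crawler (names are hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="claims_db", table_name="raw_claims")

# Transform: switch to a plain Spark DataFrame for filtering.
approved = dyf.toDF().filter("claim_status = 'APPROVED'")

# Load: partitioned Parquet in S3 for downstream Athena/Redshift Spectrum.
(approved.write.mode("overwrite")
         .partitionBy("claim_date")
         .parquet("s3://example-curated-bucket/claims/"))

job.commit()
```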
Posted 4 months ago
6 - 11 years
18 - 33 Lacs
Pune, Bengaluru
Work from Office
Urgent hiring for AWS Data Engineer
Experience: 6-18 Years
Location: Pune/Bangalore
No. of positions: 9
Notice Period: Immediate joiners
Role & responsibilities:
- Requires 5 to 10 years of experience in data engineering on the AWS platform.
- Proficiency in Spark/PySpark/Python/SQL is essential.
- Familiarity with AWS data stores including S3, RDS, DynamoDB, and AWS Data Lake, having utilized these technologies in previous projects.
- Knowledge of AWS services like Redshift, Kinesis Streaming, Glue, Iceberg, Lambda, Athena, S3, EC2, SQS, and SNS.
- Understanding of monitoring and observability toolsets like CloudWatch and Tivoli Netcool.
- Basic understanding of AWS networking components: VPC, SG, Subnets, Load Balancers.
- Collaboration with cross-functional teams to gather technical requirements and deliver high-quality ETL solutions.
- Strong AWS development experience for data ETL, pipeline, integration, and automation work.
- Deep understanding of the Data & Analytics solution development lifecycle.
- Proficient in CI/CD and Jenkins; capable of writing testing scripts and automating processes.
- Experience with IaC (Terraform or CloudFormation); basic knowledge of containers.
- Familiarity with Bitbucket/Git and experience working in an agile/scrum team.
- Experience in the Private Bank/Wealth Management domain.
Posted 4 months ago
5 - 10 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: Bengaluru, India
Experience: 5-10 Years
Notice period: Immediate
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for efficient data processing (a minimal PySpark ETL sketch follows this list).
- Build and optimize data storage solutions, ensuring high performance and reliability.
- Implement ETL processes to extract, transform, and load data from various sources.
- Work closely with data analysts and scientists to support their data needs.
- Optimize database structures and ensure data integrity.
- Develop and manage cloud-based data architectures (AWS, Azure, or Google Cloud).
- Ensure compliance with data governance and security standards.
- Monitor and troubleshoot data workflows to maintain system efficiency.
Required Skills & Qualifications:
- Strong proficiency in SQL, Python, and R for data processing.
- Experience with big data technologies like Hadoop, Spark, and Kafka.
- Hands-on expertise in ETL tools and data warehousing solutions.
- Deep understanding of database management systems (MySQL, PostgreSQL, MongoDB, etc.).
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong problem-solving and communication skills to collaborate with cross-functional teams.
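As a rough illustration of the "design, develop, and maintain scalable data pipelines" responsibility, here is a minimal batch ETL sketch in PySpark; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed by an upstream system (path is hypothetical).
raw = spark.read.option("header", True).csv("s3a://example-raw/orders/")

# Transform: enforce types, drop rows missing keys, derive a partition column.
clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .dropna(subset=["order_id", "order_ts"])
         .withColumn("order_date", F.to_date("order_ts")))

# Load: partitioned Parquet that analysts and scientists can query directly.
(clean.write.mode("append")
      .partitionBy("order_date")
      .parquet("s3a://example-curated/orders/"))
```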
Posted 4 months ago
5 - 10 years
20 - 30 Lacs
Hyderabad
Hybrid
Experience: 5 to 10 Years
Location: Hyderabad
Notice Period: Immediate to 30 Days
Skills Required:
- 5+ years of experience as a Data Engineer or in a similar role working with large data sets and ELT/ETL processes
- 7+ years of industry experience in software development
- Knowledge and practical use of a wide variety of RDBMS technologies such as MySQL, Postgres, SQL Server, or Oracle
- Use of cloud-based data warehouse technologies including Snowflake and AWS Redshift
- Strong SQL experience with an emphasis on analytic queries and performance
- Experience with various NoSQL technologies such as MongoDB or Elasticsearch
- Familiarity with either native database or external change-data-capture technologies
- Practical use of various data formats such as CSV, XML, JSON, and Parquet
- Use of data flow and transformation tools such as Apache NiFi or Talend
- Implementation of ELT processes in languages such as Java, Python, or NodeJS
- Use of large, shared data stores such as Amazon S3 or the Hadoop File System
- Thorough and practical use of various data warehouse schemas (Snowflake, Star)
If interested, please share your updated resume to arampally@jaggaer.com with the below details:
- Total Years of Experience:
- Years of Experience as Data Engineer:
- Years of experience in MySQL:
- Years of Experience in Snowflake, AWS Redshift:
- Current CTC:
- Expected CTC:
- Notice Period:
Posted 4 months ago
6 - 10 years
11 - 21 Lacs
Bengaluru
Hybrid
RESPONSIBILITIES:
- Choose the right technologies for our use cases; deploy and operate them.
- Set up data stores for structured, semi-structured, and unstructured data.
- Secure data at rest via encryption.
- Implement tooling to securely access multiple data sources.
- Implement solutions to run real-time analytics.
- Use container technologies.
Required Experience & Skills:
- Experience in one of the following: Elasticsearch, Cassandra, Hadoop, MongoDB
- Experience in Spark and Presto/Trino
- Experience with microservice-based architectures
- Experience on Kubernetes
- Experience with Unix/Linux environments is a plus
- Experience with Agile/Scrum development methodologies is a plus
- Cloud knowledge is a big plus (AWS/GCP, Kubernetes/Docker)
- Be nice, respectful, and able to work in a team
- Willingness to learn
Posted 4 months ago
6 - 10 years
10 - 20 Lacs
Hyderabad
Work from Office
We're looking for a Data Engineer to join our team. We need someone who's great at building data pipelines and understands how data works. You'll be using tools like DBT and Snowflake a lot. The most important thing for us is that you've worked with all sorts of data sources, not just files - think different cloud systems, other company databases, and various online tools.
What you'll do:
- Build and manage how data flows into our system using DBT, storing it in Snowflake (a minimal ingestion sketch follows this list).
- Design how our data is organized so it's easy to use for reports and analysis.
- Fix any data problems that come up.
- Connect to and get data from many different places, like:
  - Cloud apps (e.g., Salesforce, marketing tools)
  - Various databases (SQL Server, Oracle, etc.)
  - Streaming data
  - Different file types (CSV, JSON, etc.)
  - Other business systems
- Help us improve our data setup.
What you need:
- Experience as a Data Engineer.
- Strong skills with DBT (Data Build Tool).
- Solid experience with Snowflake.
- Must have experience working with many different types of data sources, especially cloud systems and other company databases, not just files.
- Good at data modeling (organizing data).
- Comfortable with SQL.
- Good at solving problems.
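To illustrate the "many data sources into Snowflake" requirement, here is a minimal Python sketch that stages a file exported from a cloud app and copies it into a raw Snowflake table, from which DBT models would take over. All connection parameters and object names are placeholders; a real job would pull credentials from a secrets manager.

```python
import snowflake.connector

# Placeholder credentials; never hard-code these in a real pipeline.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="CRM",
)
cur = conn.cursor()

# Stage the export on the table's internal stage, then COPY it in.
cur.execute("PUT file:///tmp/accounts.json @%ACCOUNTS_RAW")
cur.execute("""
    COPY INTO ACCOUNTS_RAW
    FROM @%ACCOUNTS_RAW
    FILE_FORMAT = (TYPE = 'JSON')
""")

cur.close()
conn.close()
```

From there, DBT models would select from the raw table, flatten the JSON, and build the reporting marts.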
Posted 4 months ago
11 - 19 years
25 - 40 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from Wilco Source, a CitiusTech company!
Position: Senior Data Engineer
Location: Chennai/Hyderabad/Bangalore/Pune/Gurgaon/Noida/Mumbai
Job Description:
- In-depth knowledge of SQL and cloud-based technologies is a must.
- Good understanding of the healthcare and life sciences domain is a must; patient support domain experience is nice to have. Prior experience at companies such as Novartis, J&J, Pfizer, or Sanofi is preferred.
- Good data analysis skills are a must.
- Experience with data warehousing concepts, data modelling, and metadata management.
- Design, develop, test, and deploy enterprise-level applications using the Snowflake platform.
- Good communication skills are a must, and the candidate must be able to provide a 4-hour overlap with EST timings (until ~09:30 PM IST).
- Good understanding of Power BI; hands-on Power BI experience is nice to have.
Posted 4 months ago
5 - 10 years
16 - 31 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Greetings from Accion Labs!
We are looking for a Sr. Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Any references would be appreciated!
Job Description / Skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS EMR/Glue/S3/RDS/Redshift/Lambda/SQS/AWS Step Functions/EventBridge
- Real-time analytics
Posted 4 months ago
10 - 18 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Hi,
We are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.
Experience: 10+ Years
Location: Bangalore & Pune
CTC: Best in the industry
Job Description - Technical Skills:
- PySpark coding skill
- Proficient in AWS data engineering services
- Experience in designing data pipelines & data lakes
If interested, kindly share your resume at nupur.tyagi@mounttalent.com
Posted 4 months ago
5 - 10 years
9 - 19 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Summary:
We are seeking an experienced Data Engineer with expertise in Snowflake and PL/SQL to design, develop, and optimize scalable data solutions. The ideal candidate will be responsible for building robust data pipelines, managing integrations, and ensuring efficient data processing within the Snowflake environment. This role requires a strong background in SQL, data modeling, and ETL processes, along with the ability to troubleshoot performance issues and collaborate with cross-functional teams.
Responsibilities:
- Design, develop, and maintain data pipelines in Snowflake to support business analytics and reporting.
- Write optimized PL/SQL queries, stored procedures, and scripts for efficient data processing and transformation.
- Integrate and manage data from various structured and unstructured sources into the Snowflake data platform.
- Optimize Snowflake performance by tuning queries, managing workloads, and implementing best practices.
- Collaborate with data architects, analysts, and business teams to develop scalable and high-performing data solutions.
- Ensure data security, integrity, and governance while handling large-scale datasets.
- Automate and streamline ETL/ELT workflows for improved efficiency and data consistency.
- Monitor, troubleshoot, and resolve data quality issues, performance bottlenecks, and system failures.
- Stay updated on Snowflake advancements, best practices, and industry trends to enhance data engineering capabilities.
Required Skills:
- Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field.
- Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions.
- Hands-on expertise in PL/SQL, including writing and optimizing complex queries, stored procedures, and functions.
- Proven ability to work with large datasets, data warehousing concepts, and cloud-based data management.
- Proficiency in SQL, data modeling, and database performance tuning.
- Experience with ETL/ELT processes and integrating data from multiple sources.
- Familiarity with cloud platforms such as AWS, Azure, or GCP is an added advantage.
- Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced) are a plus.
- Strong analytical skills, problem-solving abilities, and attention to detail.
Posted 4 months ago
6 - 11 years
11 - 20 Lacs
Hyderabad
Work from Office
We are hiring a Data Engineer for the Hyderabad location. Please find the job description below.
Role & responsibilities:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
- Deep understanding of ETL concepts and best practices.
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with data warehousing and big data technologies, specifically within AWS.
Posted 4 months ago
4 - 9 years
10 - 20 Lacs
Bangalore Rural, Bengaluru
Hybrid
We are looking for a skilled and detail-oriented Data Engineer to join our growing data team. You will be responsible for building and maintaining scalable data pipelines, optimizing data systems, and ensuring data is clean, reliable, and ready for analysis.
Mandatory Skills: Python, AWS (Glue & Lambda), SQL, PySpark, any other cloud
Key Responsibilities:
- Design, develop, and maintain robust ETL/ELT pipelines.
- Work with structured and unstructured data from multiple sources.
- Build and maintain data warehouse/data lake infrastructure.
- Ensure data quality, integrity, and governance practices.
- Collaborate with data scientists, analysts, and other engineers to deliver data solutions.
- Optimize data workflows for performance and scalability.
- Monitor and troubleshoot data pipeline issues in real time.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Proficiency in Python, PySpark, and at least one cloud platform (AWS, GCP, or Azure).
- Familiarity with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
Posted 4 months ago
7 - 12 years
13 - 23 Lacs
Hyderabad
Hybrid
Hi,
We are hiring for one of our clients for a C2H role:
Designation: Data Engineer
Education: Graduate
Experience: 6+ years
Location: Hyderabad
Skills: AWS, data engineering, Python, Scala, Java, Kafka, Databricks, etc.
Notice Period: 15 days/Immediate joiner
Regards,
Ashwini
Posted 4 months ago
7 - 12 years
11 - 21 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Dear Candidates,
Looking for Databricks / data analytics / data engineering / data lake / AWS / PySpark professionals for an MNC client.
Notice period: Immediate/max 15 days
Location: Hyderabad
Please find the job description below:
- Proficiency in the Databricks Unified Data Analytics Platform. Good-to-have skills: experience with Python.
- Strong understanding of data analytics and data processing.
- Experience in building and configuring applications.
- Knowledge of the software development lifecycle.
- Ability to troubleshoot and debug applications.
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering & analytics solutions.
- Build and operate very large data warehouses or data lakes.
- ETL optimization, designing, coding, and tuning big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies (a streaming sketch follows this posting).
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.
Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake.
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture & delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.
Professional Attributes:
- Ready to work in B shift (12 PM - 10 PM).
- Client-facing skills: solid experience working in client-facing environments, able to build trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Healthcare knowledge.
- Good communication skills.
Educational Qualification: Bachelor of Engineering / Bachelor of Technology. A 15-year full-time education is required.
Additional Information: Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS are a plus. The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office.
Kindly mention the above details if you are interested in this position, and share your profile with akdevi@crownsolution.com.
Regards,
Devi
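The low-latency streaming requirement above typically looks like PySpark Structured Streaming from Kafka into Delta Lake. A minimal sketch follows; the broker address, topic, schema, and paths are hypothetical, and it assumes a Databricks notebook where a SparkSession named `spark` already exists.

```python
from pyspark.sql import functions as F

# Read the Kafka topic as an unbounded stream (names are hypothetical).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "claims-events")
          .load())

# Kafka values arrive as bytes: cast to string, then parse JSON
# against a DDL-style schema.
parsed = (events
          .select(F.col("value").cast("string").alias("json"))
          .select(F.from_json(
              "json", "claim_id STRING, amount DOUBLE, ts TIMESTAMP").alias("r"))
          .select("r.*"))

# Append to a Delta table; the checkpoint makes the stream restartable.
(parsed.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/claims")
       .outputMode("append")
       .start("/mnt/delta/claims"))
```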
Posted 4 months ago
5 - 10 years
17 - 32 Lacs
Pune
Hybrid
We are hiring for multiple roles in the Data Engineering function and welcoming applications at all levels of experience.
Location: Pune
About bp / the team:
bp's Technology organization is the central organization for all software and platform development. We build all the technology that powers bp's businesses, from upstream energy production to downstream energy delivery to our customers. We have a variety of teams depending on your areas of interest, including infrastructure and backend services through to customer-facing web and native applications. We encourage our teams to adapt quickly by using native AWS and Azure services, including serverless, and enable them to pick the best technology for a given problem. This is meant to empower our software and platform engineers while allowing them to learn and develop themselves.
Responsibilities:
- Part of a cross-disciplinary team, working closely with other data engineers, software engineers, data scientists, data managers, and business partners.
- Architects, designs, implements, and maintains reliable and scalable data infrastructure to move, process, and serve data.
- Writes, deploys, and maintains software to build, integrate, manage, maintain, and quality-assure data at bp.
- Adheres to and advocates for software engineering standard methodologies (e.g., technical design, technical design review, unit testing, monitoring & alerting, checking in code, code review, documentation).
- Responsible for deploying secure and well-tested software that meets privacy and compliance requirements; develops, maintains, and improves CI/CD pipelines.
- Responsible for service reliability and following site-reliability engineering best practices: on-call rotations for services they maintain, responsible for defining and maintaining SLAs.
- Designs, builds, deploys, and maintains infrastructure as code. Containerizes server deployments.
- Actively contributes to improving developer velocity. Mentors others.
Qualifications:
- BS degree or equivalent experience in computer science or a related field.
- Deep, hands-on experience designing, planning, building, productionizing, maintaining, and documenting reliable and scalable data infrastructure and data products in complex environments.
- Development experience in one or more object-oriented programming languages (e.g., Python, Scala, Java, C#).
- Sophisticated database and SQL knowledge.
- Experience designing and implementing large-scale distributed data systems.
- Deep knowledge and hands-on experience in technologies across all data lifecycle stages.
- Strong stakeholder management and ability to lead initiatives through technical influence.
- Continuous learning and improvement mindset.
Desired: No prior experience in the energy industry required.
Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.
Skills: Commercial Acumen, Communication, Data Analysis, Data cleansing and transformation, Data domain knowledge, Data Integration, Data Management, Data Manipulation, Data Sourcing, Data strategy and governance, Data Structures and Algorithms, Data visualization and interpretation, Digital Security, Extract, transform and load, Group Problem Solving
Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status, or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position, and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 4 months ago
5 - 10 years
15 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Skills:
- Mandatory: SQL, Python, Databricks, Spark/PySpark
- Good to have: MongoDB, Dataiku DSS
- Experience in data processing using Python/Scala
- Advanced working SQL knowledge and expertise using relational databases
- Early joiners needed
Required Candidate profile:
- ETL development tools like Databricks/Airflow/Snowflake (a minimal Airflow DAG sketch follows this list)
- Expert in building and optimizing 'big data' pipelines, architectures, and data sets
- Proficient in big data tools and the surrounding ecosystem
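For reference, orchestration with Airflow (one of the ETL tools named above) usually boils down to a small DAG like the sketch below, assuming Airflow 2.4+; the DAG id, schedule, and task body are illustrative.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Illustrative body: a real task would trigger the Databricks job
    # or run the Spark/SQL step that loads the day's partition.
    print("running daily load")


with DAG(
    dag_id="daily_sales_load",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",              # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load",
                   python_callable=extract_and_load)
```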
Posted 4 months ago
4 - 6 years
14 - 22 Lacs
Chennai
Work from Office
- Maintain metadata, data dictionaries, and lineage documentation.
- Build and maintain scalable data pipelines and ETL processes for BI needs.
- Optimize data models and ensure clean, reliable, and well-structured data for Power BI.
- Integrate data…
Required Candidate profile:
- Strong SQL development, with experience in Power BI dataset structuring and integration.
- Experience in Python for ETL, automation, APIs, and scripting for data ingestion.
- Azure, SSIS, and cloud-based data services.
Posted 4 months ago
8 - 13 years
12 - 22 Lacs
Gurugram
Work from Office
Data & Information Architecture Lead | 8 to 15 years | Gurgaon
Summary: An excellent opportunity for data architect professionals with expertise in data engineering, analytics, AWS, and databases.
Location: Gurgaon
Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.
Responsibilities:
- Design and optimize the architecture of an end-to-end data fabric, inclusive of the data lake, data stores, and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories.
- Undertake detailed analysis of information management requirements across all systems, platforms, and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with various business partners/consumers, the MIS team, the AI/ML team, and other departments to design, deliver, and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for (a) large & complex data transformation programs, (b) small and nimble data initiatives to realize quick gains, and (c) work with OEMs and partners to bring the best tools and delivery methods.
- Drive data domain modeling, data engineering, and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility, and scale.
Requirements:
- Deep understanding of the data and information architecture discipline, processes, concepts, and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modelling skills; strong analytics and reporting experience.
- Strong data design, management, and maintenance experience.
- Strong experience with data modelling tools.
- Extensive experience with cloud-native lake technologies, e.g., AWS native lake solutions.
Posted 4 months ago
12 - 16 years
25 - 35 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
We're looking for an experienced Data Engineer Architect with expertise in AWS technologies to join our team in India. If you have a passion for analytics and a proven track record of designing and implementing complex data solutions, we want to hear from you.
Location: Noida/Gurgaon/Bangalore/Mumbai/Pune
Your Future Employer: Join a dynamic and inclusive organization at the forefront of technology, where your expertise will be valued and your career development will be a top priority.
Responsibilities:
- Designing and implementing robust, scalable data pipelines and architectures using AWS technologies.
- Collaborating with cross-functional teams to understand data requirements and develop solutions to meet business needs.
- Optimizing data infrastructure and processes for improved performance and efficiency.
- Providing technical leadership and mentorship to junior team members, and driving best practices in data engineering.
Requirements:
- 12+ years of experience in data engineering, with a focus on AWS technologies.
- Strong proficiency in analytics and data processing tools such as SQL, Spark, and Hadoop.
- Proven track record of designing and implementing large-scale data solutions.
- Experience in leading and mentoring teams, and driving technical best practices.
- Excellent communication skills and ability to collaborate effectively with stakeholders at all levels.
What's in it for you:
- Competitive compensation and benefits package.
- Opportunity to work with cutting-edge technologies and make a real impact on business outcomes.
- Career growth and development in a supportive and inclusive work environment.
Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach me with your updated profile at isha.joshi@crescendogroup.in
Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Note: We receive a lot of applications on a daily basis, so it becomes a bit difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted in case you don't hear back from us in 1 week. Your patience is highly appreciated.
Profile keywords: Data Engineer, Architect, AWS, Analytics, SQL, Spark, Hadoop, Kafka, Crescendo Global.
Posted 4 months ago
4 - 9 years
18 - 25 Lacs
Bengaluru
Hybrid
Skill required: Data Engineers - Azure
Designation: Sr. Analyst / Consultant
Job Location: Bengaluru
Qualifications: BE/BTech
Years of Experience: 4-11 Years
OVERALL PURPOSE OF JOB:
Understand client requirements and build ETL solutions using Azure Data Factory, Azure Databricks, and PySpark. Build solutions in such a way that they can absorb clients' change requests very easily. Find innovative ways to accomplish tasks and handle multiple projects simultaneously and independently. Work with data and the appropriate teams to effectively source required data. Identify data gaps and work with client teams to effectively communicate the findings to stakeholders/clients.
Responsibilities:
- Develop ETL solutions to populate a centralized repository by integrating data from various data sources.
- Create data pipelines, data flows, and data models according to the business requirements.
- Implement all transformations according to business needs.
- Identify data gaps in the data lake and work with relevant data/client teams to get the data required for dashboarding/reporting.
- Strong experience working on the Azure data platform, Azure Data Factory, and Azure Databricks.
- Strong experience working on ETL components and scripting languages like PySpark and Python.
- Experience in creating pipelines, alerts, email notifications, and scheduled jobs.
- Exposure to development/staging/production environments.
- Provide support in creating, monitoring, and troubleshooting scheduled jobs.
- Effectively work with clients and handle client interactions.
Skills Required:
- Bachelor's degree in Engineering or Science (or equivalent) with at least 4-11 years of overall experience in data management, including data integration, modeling, and optimization.
- Minimum 4 years of experience working on Azure cloud, Azure Data Factory, and Azure Databricks.
- Minimum 3-4 years of experience in PySpark, Python, etc. for data ETL.
- In-depth understanding of data warehouse and ETL concepts and modeling principles.
- Strong ability to design, build, and manage data.
- Strong understanding of data integration.
- Strong analytical and problem-solving skills.
- Strong communication and client-interaction skills.
- Ability to design databases to store the huge volumes of data necessary for reporting and dashboarding.
- Ability and willingness to acquire knowledge of new technologies; good analytical and interpersonal skills with the ability to interact with individuals at all levels.
Interested candidates can reach Neha at 9599788568 or neha.singh@mounttalent.com
Posted 4 months ago
4 - 9 years
18 - 25 Lacs
Pune, Gurugram, Chennai
Hybrid
Skill required: Data Engineers - Python, PySpark
Designation: Sr. Analyst / Consultant
Job Location: Bangalore/Gurgaon/Chennai/Pune/Mumbai
Qualifications: BE/BTech
Years of Experience: 4-11 Years
What would you do?
You will be aligned with the Insights & Intelligence vertical and help us generate insights by leveraging the latest Artificial Intelligence (AI) and analytics techniques to deliver value to our clients. You will also help us apply your expertise in building world-class solutions, conquering business problems, and addressing technical challenges using AI platforms and technologies. You will be required to utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise-class, and demonstrate yourself as an expert by actively blogging, publishing research papers, and creating awareness in this emerging area. You will be working as part of the Data Management team, which is accountable for data management including fully scalable relational database management systems and ETL (Extract, Transform and Load) - the set of methods and tools used to extract data from outside sources, transform it to fit an organization's business needs, and load it into a target such as the organization's data warehouse. The Python programming language team focuses on building multiple programming paradigms, including procedural, object-oriented, and functional programming; the team is responsible for writing logical code for different projects and takes a constructive, object-oriented approach.
What are we looking for?
- Problem-solving skills, prioritization of workload, and commitment to quality
- PySpark, Python, SQL (Structured Query Language)
Roles and Responsibilities:
In this role, you need to analyze and solve moderately complex problems. You are required to create new solutions, leveraging and, where needed, adapting existing methods and procedures. You are required to understand the strategic direction set by senior management, clearly communicate team goals and deliverables, and keep the team updated on change.
Interested candidates can reach Neha at 9599788568 or neha.singh@mounttalent.com
Posted 4 months ago
6 - 8 years
5 - 15 Lacs
Hyderabad
Hybrid
CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.
Core Duties & Responsibilities:
- Data Pipeline Development: Build and maintain scalable, high-performance data pipelines using Python and Apache Spark. Handle ingestion, transformation, and preparation of large-scale datasets from diverse sources.
- Data Infrastructure Management: Manage distributed storage systems like HDFS and MongoDB to ensure reliable and efficient data access. Monitor and tune the performance of data infrastructure for speed, scalability, and availability.
- Quality & Governance: Implement data validation checks and monitoring tools to ensure data accuracy and reliability (a validation sketch follows this list). Enforce data governance, security, and privacy best practices in all engineering processes.
- Cross-Functional Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions. Contribute to team discussions, code reviews, and architectural decisions.
- Code & Process Optimization: Write clean, modular, and well-documented code. Debug and optimize workflows, addressing performance bottlenecks as they arise.
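As a sketch of the data-validation duty, here is a minimal PySpark check over a dataset in HDFS; the path, columns, and checks are hypothetical, and a production version would publish metrics to a monitoring tool rather than simply raise.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality_checks").getOrCreate()

# Dataset path and column names are illustrative.
df = spark.read.parquet("hdfs:///data/curated/transactions")

# Two simple checks: required keys present, amounts non-negative.
null_keys = df.filter(F.col("txn_id").isNull()).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

if null_keys or bad_amounts:
    # Failing fast keeps bad data from reaching downstream consumers.
    raise ValueError(
        f"validation failed: {null_keys} null keys, "
        f"{bad_amounts} negative amounts")
```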
Posted 4 months ago
3 - 6 years
10 - 20 Lacs
Gurugram
Work from Office
About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing commercial analytics and technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.
What we are looking for: As the Associate Engagement Lead, you'll leverage data to unravel complexities and devise strategic solutions that deliver tangible results for our clients. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.
What you'll do:
- Design and implement complex, scalable enterprise data processing and BI reporting solutions.
- Design, build, and optimize ETL pipelines and the underlying code to enhance data warehouse systems.
- Work towards optimizing the overall costs incurred from system infrastructure, operations, change management, etc.
- Deliver end-to-end data solutions across multiple infrastructures and applications.
- Coach, mentor, and manage a team of junior associates, helping them plan tasks effectively and grow.
- Demonstrate overall client-stakeholder and project management skills (drive client meetings, create realistic project timelines, plan and manage individual and team tasks).
- Assist senior leadership in business development proposals focused on technology by providing SME support.
- Build strong partnerships with other teams to create valuable solutions.
- Stay up to date with the latest industry trends.
Must have:
- 3-5 years of experience in designing/building data warehouses and BI reporting, with a B.Tech/B.E. background.
- Prior experience managing client stakeholders and junior team members.
- A background in managing life science clients is mandatory.
- Proficiency in big data processing and cloud technologies like AWS, Azure, Databricks, PySpark, Hadoop, etc.; proficiency in Informatica is a plus.
- Extensive hands-on experience with cloud data warehouses like Redshift, Azure, Snowflake, etc.; proficiency in SQL, data modelling, and designing ETL pipelines is a must.
- Intermediate to expert-level proficiency in Python.
- Proficiency in Tableau, Power BI, or Qlik is a must.
- Should have worked on large datasets and complex data modelling projects.
- Prior experience in business development activities is mandatory.
- Domain knowledge of the pharma/healthcare landscape is mandatory.
Posted 4 months ago
4 - 9 years
16 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
1. Strong experience as an AWS Data Engineer
2. Experience in Python/PySpark
3. Experience in EMR, Glue, Athena, Redshift, Lambda
Posted 4 months ago
7 - 12 years
10 - 20 Lacs
Bengaluru
Work from Office
Senior Snowflake Data Engineer
Experience: 7+ Years
Location: Bangalore (Work from Office)
Notice period: Immediate only
Mandatory skills: Data Engineering, Snowflake, Databricks
Posted 4 months ago