4 - 8 years
10 - 15 Lacs
Pune
Remote
Position: AWS Data Engineer

About bluCognition: bluCognition is an AI/ML-based start-up specializing in developing data products that leverage alternative data sources and in providing servicing support to clients in the financial services sector. Founded in 2017 by well-known senior professionals from the financial services industry, the company is headquartered in the US, with its delivery centre based in Pune. We build all our solutions using the latest technology stack in AI, ML and NLP, combined with decades of experience in risk management at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com

The Role: We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making.

Key Responsibilities:
1) Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services.
2) Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion.
3) Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency.
4) Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance.
5) Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
6) Monitor, troubleshoot, and improve the reliability and quality of data systems.
7) Implement data quality checks, logging, and error handling in data pipelines.
8) Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for environment management.
9) Stay up to date with the latest developments in AWS services and big data technologies.

Required Qualifications:
1) Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
2) 4+ years of experience working as a data engineer or in a similar role.
3) Strong experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, and AWS Step Functions.
4) Proficiency in SQL and Python.
5) Solid understanding of data modeling, ETL processes, and data warehouse architecture.
6) Experience with orchestration tools like Apache Airflow or AWS Managed Workflows.
7) Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.).
8) Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.).

Preferred Qualifications:
1) Good to have - AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect certification.
2) Experience with real-time data streaming technologies like Kinesis or Kafka.
3) Familiarity with DevOps practices and CI/CD pipelines.
4) Knowledge of machine learning data preparation and MLOps workflows.

Soft Skills:
1) Excellent problem-solving and analytical skills.
2) Strong communication skills with both technical and non-technical stakeholders.
3) Ability to work independently and collaboratively in a team environment.
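As a flavour of the responsibilities above (ETL pipelines with data quality checks and error handling), here is a minimal, hedged PySpark sketch of one pipeline step, as it might run on Glue or EMR; the bucket names, columns, and rejection threshold are illustrative assumptions, not part of the posting.

```python
# Minimal, hypothetical PySpark pipeline step: ingest raw CSVs from S3, apply a data quality gate,
# and write curated, partitioned parquet. Buckets, columns, and the 5% threshold are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest-example").getOrCreate()

# Extract: raw files landed by an upstream ingestion job.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: type the amount column and keep only rows passing simple quality rules.
typed = raw.withColumn("order_amount", F.col("order_amount").cast("double"))
valid = typed.filter(F.col("order_id").isNotNull() & (F.col("order_amount") >= 0))

# Error handling / logging: fail the job loudly if too many rows are rejected.
total = typed.count()
rejected = total - valid.count()
if total and rejected / total > 0.05:
    raise ValueError(f"Data quality check failed: {rejected}/{total} rows rejected")

# Load: partitioned parquet that Athena or Redshift Spectrum can query.
(valid.withColumn("order_date", F.to_date("order_ts"))
      .write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```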
Posted 2 months ago
1 - 4 years
7 - 11 Lacs
Bengaluru
Work from Office
---- What the Candidate Will Do ----
Design, develop, and maintain robust and scalable software solutions
Find opportunities for quality improvements in the search stack and lead the entire development lifecycle end-to-end, from architecture design and coding to testing and deployment
---- Basic Qualifications ----
Bachelor's degree in Computer Science
3+ years of professional experience in software development with a track record of increasing responsibility and impact.
Experience with Go and Python programming languages
Demonstrated experience developing sophisticated backend systems and longer-term ownership of critical backend services and infrastructure.
Bias to action and proven track record of getting things done.
---- Preferred Qualifications ----
Master's degree in Computer Science
Big Data: Proficiency building data pipelines. Experience in using PySpark at scale with large data sets. Experience with t
Posted 2 months ago
1 - 5 years
12 - 17 Lacs
Hyderabad
Work from Office
Job Area: Information Technology Group, Information Technology Group > IT Data Engineer

General Summary: The developer will play an integral role in the PTEIT Machine Learning Data Engineering team, designing, developing and supporting data pipelines in a hybrid cloud environment to enable advanced analytics, and designing, developing and supporting CI/CD of data pipelines and services.
- 5+ years of experience with Python or equivalent programming using OOPS, Data Structures and Algorithms
- Develop new services in AWS using serverless and container-based services.
- 3+ years of hands-on experience with the AWS suite of services (EC2, IAM, S3, CDK, Glue, Athena, Lambda, Redshift, Snowflake, RDS)
- 3+ years of expertise in scheduling data flows using Apache Airflow
- 3+ years of strong data modelling (Functional, Logical and Physical) and data architecture experience in Data Lake and/or Data Warehouse
- 3+ years of experience with SQL databases
- 3+ years of experience with CI/CD and DevOps using Jenkins
- 3+ years of experience with event-driven architecture, especially Change Data Capture
- 3+ years of experience in Apache Spark, SQL, Redshift (or) BigQuery (or) Snowflake, Databricks
- Deep understanding of building efficient data pipelines with data observability, data quality, schema drift, alerting and monitoring.
- Good understanding of Data Catalogs, Data Governance, Compliance, Security and Data Sharing
- Experience in building reusable services across data processing systems.
- Should have the ability to work and contribute beyond defined responsibilities
- Excellent communication and interpersonal skills with deep problem-solving skills.

Minimum Qualifications: 3+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems or a related field, OR 5+ years of IT-related work experience without a Bachelor's degree. 2+ years of any combination of academic or work experience with programming (e.g., Java, Python). 1+ year of any combination of academic or work experience with SQL or NoSQL databases. 1+ year of any combination of academic or work experience with Data Structures and Algorithms.

5 years of industry experience and a minimum of 3 years of Data Engineering development experience with highly reputed organizations - Proficiency in Python and AWS - Excellent problem-solving skills - Deep understanding of data structures and algorithms - Proven experience in building cloud-native software, preferably with the AWS suite of services - Proven experience in designing and developing data models using RDBMS (Oracle, MySQL, etc.)

Desirable:
- Exposure or experience in other cloud platforms (Azure and GCP)
- Experience working on internals of large-scale distributed systems and databases such as Hadoop, Spark
- Working experience on Data Lakehouse platforms (Onehouse, Databricks Lakehouse)
- Working experience on Data Lakehouse file formats (Delta Lake, Iceberg, Hudi)

Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
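Since the posting above emphasizes scheduling data flows with Apache Airflow, a minimal, hedged DAG sketch is shown below; the DAG id, schedule, and task bodies are placeholders invented for illustration, not an actual production pipeline.

```python
# Minimal, hypothetical Apache Airflow DAG: a daily extract -> transform -> load flow.
# DAG id, schedule, and task logic are illustrative placeholders only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # e.g. pull changed rows from a source system (Change Data Capture feed)
    print("extracting changed records")

def transform(**context):
    # e.g. apply typing, deduplication and schema-drift checks
    print("transforming records")

def load(**context):
    # e.g. load the curated records into Redshift or Snowflake
    print("loading records")

with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```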
Posted 2 months ago
3 - 6 years
20 - 25 Lacs
Hyderabad
Work from Office
Overview
As a member of the Platform engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's Platforms & operations, and will drive a strong vision for how Platform engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of Platform engineers who build Platform products for platform optimization and cost optimization, build tools for Platform Ops and Data Ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the Platform engineering team, you will help manage the platform Governance team, which builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Active contributor to cost optimization of platforms and services. Manage and scale Azure Data Platforms to support new product launches and drive Platform Stability and Observability across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for Data Platforms for cost and performance. Responsible for implementing best practices around systems integration, security, performance and Platform management. Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionize data science models. Define and manage SLAs for Platforms and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications
2+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and data engineering. 1+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 1+ years of experience in Databricks optimization and performance tuning. Experience in managing multiple teams and coordinating with different stakeholders to implement the vision of the team. Fluent with Azure cloud services; Azure Certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory and Azure Databricks. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).
Posted 2 months ago
11 - 20 years
45 - 75 Lacs
Gurugram
Work from Office
Role & responsibilities
Proven success architecting and scaling complex software solutions; familiarity with interface design. Experience and ability to drive a project/module independently from an execution standpoint. Prior experience with scalable architecture and distributed processing. Strong programming expertise in Python, SQL, Scala. Hands-on experience with any major big data solutions like Spark, Kafka, Hive. Strong data management skills with ETL, DWH, Data Quality and Data Governance. Hands-on experience with microservices architecture, Docker and Kubernetes as orchestration. Experience with cloud-based data stores like Redshift and BigQuery. Experience in cloud solution architecture. Experience with the architecture of running Spark jobs on Kubernetes and optimization of Spark jobs. Experience in MLOps architecture/tools/orchestrators like Kubeflow, MLflow. Experience in logging, metrics and distributed tracing systems (e.g. Prometheus/Grafana/Kibana). Experience in CI/CD using Octopus/TeamCity/Jenkins. Interested candidates can share their updated resume at surinder.kaur@mounttalent.com
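For the MLOps tooling mentioned above, a minimal MLflow tracking sketch might look like the following; the experiment name, parameters, and metric values are assumptions made for illustration.

```python
# Minimal, hypothetical MLflow tracking example: log parameters and a metric for one training run.
# Experiment name, parameters, and metric values are illustrative only.
import mlflow

mlflow.set_experiment("example-churn-model")

with mlflow.start_run(run_name="baseline"):
    # record the hyperparameters used for this run
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)

    # ... train the model here ...
    validation_auc = 0.87  # placeholder result

    # record the evaluation metric so runs can be compared in the MLflow UI
    mlflow.log_metric("validation_auc", validation_auc)
```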
Posted 2 months ago
5 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role

What we offer: Our mission is simple - Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team: DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions including real-time, micro-batch, batch and analytics solutions in a programmatic way, and also be futuristic in building systems which can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks including concepts of serverless data solutions, managing a central data warehouse for extremely high concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.

Data Governance: The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager: 10+ years of engineering experience, most of which is in the Data domain. 5+ years of engineering team management experience. 10+ years of planning, designing, developing and delivering consumer software experience. Experience partnering with product or program management teams. 5+ years of experience in managing data engineers, business intelligence engineers and/or data scientists. Experience designing or architecting (design patterns, reliability and scaling) new and existing systems. Experience managing multiple concurrent programs, projects and development teams in an Agile environment. Strong understanding of Data Platform, Data Engineering and Data Governance. Experience partnering with product and program management teams. Experience designing and developing large-scale, high-traffic applications.

PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
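As one illustration of the "extract, transform, and load data from various sources using SQL and AWS big data technologies" responsibility above, here is a hedged sketch that runs a SQL statement against Redshift through the boto3 Redshift Data API; the cluster, database, schema, and table names are hypothetical.

```python
# Hypothetical example: run a SQL load statement on Redshift via the boto3 Redshift Data API,
# then poll for completion. Cluster, database, user, and table names are placeholders.
import time
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

sql = """
    INSERT INTO analytics.daily_branch_summary
    SELECT branch_id, trunc(txn_ts) AS txn_date, count(*) AS txn_count, sum(amount) AS total_amount
    FROM staging.transactions
    WHERE trunc(txn_ts) = current_date - 1
    GROUP BY branch_id, trunc(txn_ts);
"""

resp = client.execute_statement(
    ClusterIdentifier="example-dex-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=sql,
)

# Poll until the statement finishes, then report the outcome.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(5)

if desc["Status"] != "FINISHED":
    raise RuntimeError(f"Redshift load failed: {desc.get('Error')}")
print("Rows affected:", desc.get("ResultRows"))
```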
Posted 2 months ago
8 - 13 years
30 - 32 Lacs
Bengaluru
Work from Office
About The Role

Data Engineer - 2 (Experience: 2-5 years)

What we offer: Our mission is simple - Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team: DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions including real-time, micro-batch, batch and analytics solutions in a programmatic way, and also be futuristic in building systems which can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks including concepts of serverless data solutions, managing a central data warehouse for extremely high concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.

Data Governance: The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
Posted 2 months ago
4 - 9 years
16 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities
1. Strong experience in AWS data engineering
2. Experience in Python/PySpark
3. Experience in EMR, Glue, Athena, Redshift, Lambda
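As a small illustration of the Athena experience asked for above, here is a hedged boto3 sketch that submits a query and waits for it to complete; the database, table, and output bucket are made-up names for the example.

```python
# Hypothetical example: run an Athena query with boto3 and wait for it to finish.
# Database, table, and output bucket are illustrative placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

query = "SELECT event_date, count(*) AS events FROM analytics.clickstream GROUP BY event_date"

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state != "SUCCEEDED":
    raise RuntimeError(f"Athena query ended in state {state}")

# Fetch the first page of results (header row included).
rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
print(f"Returned {len(rows) - 1} data rows")
```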
Posted 2 months ago
10 - 15 years
0 - 0 Lacs
Chennai
Work from Office
About the Role As a Senior Data Engineer you’ll be a core part of our engineering team. You will bring your valuable experience and knowledge, improving the technical quality of our data-focused products. This is a key role in helping us become more mature, deliver innovative new products and unlock further business growth. This role will be part of a newly formed team that will collaborate alongside data team members based in Ireland, USA and India. Following the successful delivery of some fantastic products in 2024, we have embarked upon a data-driven strategy in 2025. We have a huge amount of data and are keen to accelerate unlocking its value to delight our customers and colleagues. You will be tasked with delivering new data pipelines, actionable insights in automated ways and enabling innovative new product features. Reporting to our Team Lead, you will be collaborating with the engineering and business teams. You’ll work across all our brands, helping to shape their future direction. Working as part of a team, you will help shape the technical design of our platforms and solve complicated problems in elegant ways that are robust, scalable, and secure. We don’t get everything right first time, but you will help us reflect, adjust and be better next time around. We are looking for people who are inquisitive, confident exploring unfamiliar problems, and have a passion for learning. We don’t have all the answers and don’t expect you to know everything either. Our team culture is open, inclusive, and collaborative – we tackle goals together. Seeking the best solution to a problem, we actively welcome ideas and opinions from everyone in the team. Our Technologies We are continuously evolving our products and exploring new opportunities. We are focused on selecting the right technologies to solve the problem at hand. We know the technologies we’ll be using in 3 years’ time will probably be quite different to what we’re using today. You’ll be a key contributor to evolving our tech stack over time. Our data pipelines are currently based upon Google BigQuery, FiveTran and DBT Cloud. These involve advanced SQL alongside Python in a variety of areas. We don’t need you to be an expert with these technologies, but it will help if you’re strong with something similar. Your Skills and Experience This is an important role for us as we scale up the team and we are looking for someone who has existing experience at this level. You will have worked with data driven platforms that involve some kind of transaction, such as eCommerce, trading platforms or advertising lead generation. Your broad experience and knowledge of data engineering methods mean you’re able to build high quality products regardless of the language used – solutions that avoid common pitfalls impacting the platform’s technical performance. You can apply automated approaches for tracking and measuring quality throughout the whole lifecycle, through to the production environments. You are comfortable working with complex and varied problems. As a strong communicator, you work well with product owners and business stakeholders. You’re able to influence and persuade others by listening to their views, explaining your own thoughts, and working to achieve agreement. We have many automotive industry experts within our team already and they are eager to teach you everything you need to know for this role. Any existing industry knowledge is a bonus but is not necessary. 
This is a full-time role based in our India office on a semi-flexible basis. Our engineering team is globally distributed but we’d like you to be accessible to the office for ad-hoc meetings and workshops.
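Since the stack described above centres on Google BigQuery with Python and SQL, a minimal sketch of querying BigQuery from Python might look like this; the project, dataset, and table names are invented for illustration and are not the company's actual schema.

```python
# Minimal, hypothetical example of running a SQL query against BigQuery from Python.
# Project, dataset, and table names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT brand, DATE(created_at) AS day, COUNT(*) AS listings
    FROM `example-project.marketplace.vehicle_listings`
    WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY brand, day
    ORDER BY day, brand
"""

# Run the query and iterate over the result rows.
for row in client.query(sql).result():
    print(row.brand, row.day, row.listings)
```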
Posted 2 months ago
3 - 8 years
9 - 13 Lacs
Mumbai, Pune
Work from Office
About The Role (Shift: EST US timings)
The Python Full-Stack Developer is hands-on with our application implementation for various clients. Individuals will work closely with other team members to design, develop, and deploy integration solutions using Python.

Primary Duties and Responsibilities: Participate in all phases of the software development lifecycle, including discovery, analysis, requirements definition, solution design, configuration, code development, testing, deployment, and support. Collaborate cross-functionally with client technical/non-technical stakeholders to gather and understand the requirements. Participate in setting standards for various stages in the project lifecycle. Implement data security processes and methods. Participate in security reviews of the integration landscape and solutions. Work on proofs of concept early in the project to ensure smooth implementation of the solution. Document the artefacts to ensure proper knowledge transfer within the team. Lead the team to deliver a high-quality product on a defined schedule. Highlight risks and gaps early in the project lifecycle to identify the correct path forward. Evaluate new tools and technology to ensure an automated and stable environment.

Skill & Experience required: 5+ years of experience in developing integration solutions for clients using Python. Apache Airflow experience (or similar). Experience using Test-Driven Development. Advanced knowledge of algorithms and data types. Experience creating and maintaining large-scale ETL jobs. Experience working on NoSQL databases like Couchbase, MongoDB. Hands-on experience in building solutions with AWS S3 and Redshift (GCP is a plus). Experience in building integration solutions with multiple cloud applications. A clear understanding of integration design patterns and the advantages of implementing them in a complex ecosystem. Strong communication skills and experience working with internal and external partners. A good knowledge of Agile methodologies and a proven ability to prioritize.

Nice to Haves: CS background or affinities. Experience in a high-growth technology company. Experience in the payments space. Experience in consulting or finance.
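To illustrate the Test-Driven Development expectation above, here is a hedged sketch of a tiny ETL transform with a pytest-style unit test written alongside it; the function and field names are made up for the example.

```python
# Hypothetical example: a small ETL transform function and a pytest unit test (TDD style).
# Function and field names are illustrative only.
def normalize_payment(record: dict) -> dict:
    """Normalize a raw payment record: trim and lower-case the currency, convert the amount to cents."""
    return {
        "payment_id": record["payment_id"],
        "currency": record["currency"].strip().lower(),
        "amount_cents": int(round(float(record["amount"]) * 100)),
    }


def test_normalize_payment():
    raw = {"payment_id": "p-123", "currency": " USD ", "amount": "12.50"}
    assert normalize_payment(raw) == {
        "payment_id": "p-123",
        "currency": "usd",
        "amount_cents": 1250,
    }
```

Running `pytest` against this file would exercise the transform in isolation before it is wired into a larger ETL job.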
Posted 2 months ago
1 - 4 years
2 - 6 Lacs
Mumbai
Work from Office
About The Role
The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Associate Process Manager - Roles and responsibilities: Utilize Adobe Analytics to collect, analyze, and interpret data related to website traffic, user behavior, and digital marketing campaigns. Develop and maintain custom reports, dashboards, and visualizations to communicate insights effectively to stakeholders. Collaborate with stakeholders to define key performance indicators (KPIs) and develop measurement frameworks to track and evaluate business performance. Identify opportunities for optimization and improvement across various aspects of the business, including website usability, customer journey, and marketing effectiveness. Conduct in-depth analysis using the Python programming language to uncover actionable insights and drive strategic decision-making. Stay abreast of industry trends, emerging technologies, and best practices in digital analytics, data science, and related fields. Manage project timelines, resources, and deliverables to ensure successful execution of analytics initiatives. Lead and mentor a team of analysts, providing guidance and support to drive professional growth and development.

Technical and Functional Skills: Graduate with a minimum of 3 to 5 years of proven experience with data visualization tools such as Tableau, Power BI, or similar. Should have hands-on experience with Adobe Analytics, reporting, and Python. Ability to effectively manage multiple work assignments while being able to shift priorities. Domain knowledge of various industries such as Banking, Retail, eCommerce etc. Excellent verbal and written communication skills. Strong analytical, quantitative and problem-solving skills.
Posted 2 months ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager - Roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
Posted 2 months ago
1 - 4 years
2 - 6 Lacs
Pune
Work from Office
About The Role
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager - Roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
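One of the responsibilities listed above is implementing data security and encryption best practices on AWS; a hedged boto3 sketch for enforcing default KMS encryption on an S3 data-lake bucket might look like this, with the bucket and key names invented for illustration.

```python
# Hypothetical example: enforce default server-side encryption (SSE-KMS) on an S3 data-lake bucket.
# Bucket name and KMS key alias are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-data-lake-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-lake-key",
                },
                "BucketKeyEnabled": True,  # reduce KMS request costs for high-volume writes
            }
        ]
    },
)

# Verify the configuration that is now in effect.
config = s3.get_bucket_encryption(Bucket="example-data-lake-bucket")
print(config["ServerSideEncryptionConfiguration"]["Rules"])
```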
Posted 2 months ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role
Process Manager - AWS Data Engineer
Mumbai/Pune | Full-time (FT) | Technology Services
Shift Timings - EMEA (1pm-9pm) | Management Level - PM | Travel Requirements - NA

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager - Roles and responsibilities: Understand client requirements and provide effective and efficient solutions in AWS using Snowflake. Assemble large, complex sets of data that meet non-functional and functional business requirements. Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse. Demonstrated strength and experience in data modeling, ETL development and data warehousing concepts. Understanding of data pipelines and modern ways of automating data pipelines using cloud-based tooling. Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions. Perform data quality testing and assurance as a part of designing, building and implementing scalable data solutions in SQL.

Technical and Functional Skills:
AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java.
Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions.
Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx: eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry.
Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
Posted 2 months ago
8 - 12 years
20 - 25 Lacs
Gandhinagar
Remote
Requirement: 8+ years of professional experience as a data engineer and 2+ years of professional experience as a senior data engineer.
Must have strong working experience in Python and its various data analysis packages (Pandas / NumPy).
Must have a strong understanding of prevalent cloud ecosystems and experience in one of the cloud platforms - AWS / Azure / GCP.
Must have strong working experience in one of the leading MPP databases - Snowflake / Amazon Redshift / Azure Synapse / Google BigQuery.
Must have strong working experience in one of the leading cloud data orchestration tools - Azure Data Factory / AWS Glue / Apache Airflow.
Must have experience working with Agile methodologies, Test-Driven Development, and implementing CI/CD pipelines using one of the leading services - GitLab / Azure DevOps / Jenkins / AWS CodePipeline / Google Cloud Build.
Must have Data Governance / Data Management / Data Quality project implementation experience.
Must have experience in big data processing using Spark.
Must have strong experience with SQL databases (SQL Server, Oracle, Postgres etc.).
Must have stakeholder management experience and very good communication skills.
Must have working experience on end-to-end project delivery, including requirement gathering, design, development, testing, deployment, and warranty support.
Must have working experience with various testing levels, such as unit testing, integration testing and system testing.
Working experience with large, heterogeneous datasets in building and optimizing data pipelines and pipeline architectures.
Nice to have skills:
Working experience in Databricks notebooks and managing Databricks clusters.
Experience in a data modelling tool such as Erwin or ER Studio.
Experience in one of the data architectures, such as Data Mesh or Data Fabric.
Has handled real-time or near-real-time data.
Experience in one of the leading reporting and analysis tools, such as Power BI, Qlik, Tableau or Amazon QuickSight.
Working experience with API integration.
General insurance / banking / finance domain understanding.
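Given the emphasis above on Python with Pandas/NumPy and on data quality, a minimal sketch of a Pandas-based validation step might look like the following; the file path, columns, and validation rules are assumptions made for illustration.

```python
# Hypothetical Pandas/NumPy data-quality gate run before a file is loaded downstream.
# File path, columns, and validation rules are illustrative placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("policies.csv", parse_dates=["start_date"])

premium = df["premium"].astype(float)
z_scores = np.abs((premium - premium.mean()) / premium.std())

issues = {
    "missing_policy_id": int(df["policy_id"].isna().sum()),
    "negative_premium": int((premium < 0).sum()),
    "duplicate_policy_id": int(df["policy_id"].duplicated().sum()),
    "future_start_date": int((df["start_date"] > pd.Timestamp.now()).sum()),
    "outlier_premium": int((z_scores > 4).sum()),
}

print("Data quality report:", issues)
if any(issues.values()):
    raise ValueError(f"Data quality checks failed: {issues}")
```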
Posted 2 months ago
3 - 6 years
20 - 25 Lacs
Hyderabad
Work from Office
Overview
Job Title: Senior DevOps Engineer
Location: Bangalore / Hyderabad / Chennai / Coimbatore
Position: Full-time
Department: Annalect Engineering

Position Overview: Annalect is currently seeking a Senior DevOps Engineer to join our technology team remotely. We are passionate about building distributed back-end systems in a modular and reusable way. We're looking for people who have a shared passion for data and a desire to build cool, maintainable and high-quality applications to use this data. In this role you will participate in shaping our technical architecture and the design and development of software products, collaborate with back-end developers from other tracks, and research and evaluate new technical solutions.

Responsibilities
Key Responsibilities: Build and maintain cloud infrastructure through Terraform IaC. Cloud networking and orchestration with AWS (EKS, ECS, VPC, S3, ALB, NLB). Improve and automate processes and procedures. Construct CI/CD pipelines. Monitor and handle incident response for the infrastructure, platforms, and core engineering services. Troubleshoot infrastructure, network, and application issues. Help identify and troubleshoot problems within the environment.

Qualifications
Required Skills: 5+ years of DevOps experience. 5+ years of hands-on experience in administering cloud technologies on AWS, especially with IAM, VPC, Lambda, EKS, EC2, S3, ECS, CloudFront, ALB, API Gateway, RDS, CodeBuild, SSM, Secrets Manager, etc. Experience with microservices, containers (Docker), and container orchestration (Kubernetes). Demonstrable experience of using Terraform to provision and configure infrastructure. Scripting ability - PowerShell, Python, Bash etc. Comfortable working with Linux/Unix-based operating systems (Ubuntu preferred). Familiarity with software development, CI/CD and DevOps tools (Bitbucket, Jenkins, GitLab, CodeBuild, CodePipeline). Knowledge of writing Infrastructure as Code (IaC) using Terraform. Experience with microservices, containers (Docker), container orchestration (Kubernetes), serverless computing (AWS Lambda) and distributed/scalable systems. Possesses a problem-solving attitude. Creative, self-motivated, a quick study, and willing to develop new skills.

Additional Skills: Familiarity with working with data and databases (SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Amazon Redshift, Google BigQuery). Knowledge of database administration. Experience with continuous deployment/continuous delivery (Jenkins, Bamboo). AWS/GCP/Azure Certification is a plus. Experience in Python coding is welcome. Passion for data-driven software - all of our tools are built on top of data and require work with data. Knowledge of IaaS/PaaS architecture with a good understanding of infrastructure and web application security. Experience with logging/monitoring (CloudWatch, Datadog, Loggly, ELK). Passion for writing good documentation and creating architecture diagrams.
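For the monitoring and incident-response responsibilities above, a hedged boto3 sketch that creates a CloudWatch alarm on a load-balancer error metric might look like this; the alarm name, load balancer dimension, threshold, and SNS topic ARN are invented for the example.

```python
# Hypothetical example: create a CloudWatch alarm that notifies an SNS topic when ALB 5xx errors spike.
# Alarm name, load balancer dimension, and SNS topic ARN are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-alb-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_ELB_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=300,            # evaluate in 5-minute windows
    EvaluationPeriods=2,   # two consecutive breaching windows trigger the alarm
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:example-oncall-topic"],
)
print("Alarm created")
```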
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Bengaluru
Work from Office
Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions. Write efficient, complex SQL queries for data extraction, transformation, and loading. Utilize DBT for data modelling and transformation. Use Python for data engineering tasks, demonstrating strong work experience in this area. Implement scheduling tools like Airflow, Control-M, or shell scripting to automate data processes and workflows. Participate in an Agile environment, adapting quickly to changing priorities and requirements.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: Proven expertise in AWS technologies, with a strong understanding of AWS services. Experience in Redshift is optional. Experience in data warehousing with a solid grasp of SQL, including the ability to write complex queries. Proficiency in Python, demonstrating good work experience in data engineering tasks. Familiarity with scheduling tools like Airflow, Control-M, or shell scripting. Excellent communication skills and a willing attitude towards learning.

Preferred technical and professional experience: Knowledge of DBT for data modelling and transformation is a plus. Experience with PySpark or Spark is highly desirable. Familiarity with DevOps, CI/CD, and Airflow is beneficial. Experience in Agile environments is a nice-to-have.
Posted 2 months ago
4 - 8 years
15 - 25 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
We are looking for a hands-on AWS Data Engineer for a permanent role.
Experience: 4 to 8 years
Location: Hyderabad / Chennai / Noida / Pune / Bangalore
Notice Period: Immediate
Skills:
Expertise in data warehousing and ETL design and implementation
Hands-on experience with a programming language like Python
Good understanding of Spark architecture along with its internals
Hands-on experience using AWS services like Glue (PySpark), Lambda, S3, Athena
Experience with Snowflake is good to have
Hands-on experience implementing different loading strategies like SCD1 and SCD2, table/partition refresh, insert-update, and swap partitions
Experience in parallel loading and dependency orchestration
Awareness of scheduling and orchestration tools
Experience with RDBMS systems and concepts
Expertise in writing complex SQL queries and developing database components including creating views, stored procedures, triggers etc.
Create test cases and perform unit testing of ETL jobs
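Since the skills above call out SCD1/SCD2 loading strategies, here is a hedged PySpark sketch of a simple SCD Type 2 load; the dimension schema (customer_id, email, city, start_date, end_date, is_current), paths, and tracked columns are assumptions made for the example.

```python
# Hypothetical PySpark sketch of an SCD Type 2 load: expire changed dimension rows and append new versions.
# Assumes the dimension has exactly: customer_id, email, city, start_date, end_date, is_current.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-example").getOrCreate()

dim = spark.read.parquet("s3://example-warehouse/dim_customer/")        # existing dimension with history
incoming = spark.read.parquet("s3://example-staging/customers_delta/")  # today's snapshot: customer_id, email, city

key = "customer_id"
tracked = ["email", "city"]  # attributes whose changes create a new version

current = dim.filter("is_current = true")

# Incoming rows whose tracked attributes differ from the current dimension version.
changed = (incoming.alias("n")
           .join(current.alias("c"), key)
           .filter(" OR ".join(f"n.{col} <> c.{col}" for col in tracked))
           .select("n.*"))

# Brand new business keys not present in the dimension yet.
new_keys = incoming.join(current, key, "left_anti")

today = F.current_date()
changed_keys = changed.select(key)

# 1) Expire the current versions of changed keys.
expired = (current.join(changed_keys, key, "left_semi")
           .withColumn("is_current", F.lit(False))
           .withColumn("end_date", today))

# 2) Keep history rows and current rows whose keys did not change.
untouched = dim.filter("is_current = false").unionByName(
    current.join(changed_keys, key, "left_anti"))

# 3) Insert new versions for changed keys and first versions for new keys.
inserts = (changed.unionByName(new_keys)
           .withColumn("start_date", today)
           .withColumn("end_date", F.lit(None).cast("date"))
           .withColumn("is_current", F.lit(True)))

# Write to a new path to avoid overwriting the source we are still reading from.
result = untouched.unionByName(expired).unionByName(inserts)
result.write.mode("overwrite").parquet("s3://example-warehouse/dim_customer_new/")
```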
Posted 3 months ago
12 - 17 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Python (Programming Language), Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Aurora, PySpark
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Expected to provide solutions to problems that apply across multiple teams. Lead the application development process effectively. Ensure seamless communication within the team and with stakeholders.

Professional & Technical Skills: Must-have skills: Proficiency in Python (Programming Language), AWS Aurora, PySpark, Oracle Procedural Language Extensions to SQL (PL/SQL). Strong understanding of cloud computing concepts. Experience in designing and implementing scalable applications. Proficient in database management and optimization. Skilled in troubleshooting and problem-solving.

Additional Information: The candidate should have a minimum of 12 years of experience in Python (Programming Language). This position is based at our Hyderabad office. 15 years of full-time education is required.
Posted 3 months ago
12 - 17 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: Data Engineering, AWS Big Data, PySpark
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Expected to provide solutions to problems that apply across multiple teams. Lead the application development process. Coordinate with stakeholders to gather requirements. Ensure timely project delivery.

Professional & Technical Skills: Must-have skills: Proficiency in AWS Glue, Data Engineering, PySpark, AWS Big Data. Strong understanding of cloud computing principles. Experience in designing and implementing data pipelines. Knowledge of ETL processes and data transformation. Familiarity with data warehousing concepts.

Additional Information: The candidate should have a minimum of 12 years of experience in AWS Glue. This position is based at our Bengaluru office. 15 years of full-time education is required.
Posted 3 months ago
7672 Jobs | Paris,France