7.0 - 9.0 years
0 - 0 Lacs
pune, mumbai city
Remote
Job Description: We are seeking a skilled Data Engineer with 7+ years of experience in data processing, ETL pipelines, and cloud-based data solutions. The ideal candidate will have strong expertise in AWS Glue, Redshift, S3, EMR, and Lambda, with hands-on experience using Python and PySpark for large-scale data transformations. The candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. The candidate should also have strong expertise in Terraform and Git-based CI/CD pipelines to support infrastructure automation and configuration management.
Key Responsibilities:
ETL Development & Automation:
- Design and implement ETL pipelines using AWS Glue and PySpark to transform raw data into consumable formats.
- Automate data processing workflows using AWS Lambda and Step Functions.
Data Integration & Storage:
- Integrate and ingest data from various sources into Amazon S3 and Redshift.
- Optimize Redshift for query performance and cost efficiency.
Data Processing & Analytics:
- Use AWS EMR and PySpark for large-scale data processing and complex transformations.
- Build and manage data lakes on Amazon S3 for analytics use cases.
Monitoring & Optimization:
- Monitor and troubleshoot data pipelines to ensure high availability and performance.
- Implement best practices for cost optimization and performance tuning in Redshift, Glue, and EMR.
Terraform & Git-based Workflows:
- Design and implement Terraform modules to provision cloud infrastructure across AWS/Azure/GCP.
- Manage and optimize CI/CD pipelines using Git-based workflows (e.g., GitHub Actions, GitLab CI, Jenkins, Azure DevOps).
- Collaborate with developers and cloud architects to automate infrastructure provisioning and deployments.
- Write reusable and scalable Terraform modules following best practices and code quality standards.
- Maintain version control, branching strategies, and code promotion processes in Git.
Collaboration:
- Work closely with stakeholders to understand requirements and deliver solutions.
- Document data workflows, designs, and processes for future reference.
Must-Have Skills:
- Strong proficiency in Python and PySpark for data engineering tasks.
- Hands-on experience with AWS Glue, Redshift, S3, and EMR.
- Expertise in building, deploying, and optimizing data pipelines and workflows.
- Solid understanding of SQL and database optimization techniques.
- Strong hands-on experience with Terraform, including writing and managing modules, state files, and workspaces.
- Proficiency in CI/CD pipeline design and maintenance using tools such as GitHub Actions, GitLab CI, Jenkins, or Azure DevOps Pipelines.
- Deep understanding of Git workflows (e.g., GitFlow, trunk-based development).
- Experience in serverless architecture using AWS Lambda for automation and orchestration.
- Knowledge of data modeling, partitioning, and schema design for data lakes and warehouses.
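Purely as an illustration of the Glue/PySpark ETL work this listing describes, a minimal job script might look like the sketch below; the catalog database, table, and bucket names are invented placeholders, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Resolve the job name passed in by the Glue job runner
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records registered in the Glue Data Catalog (placeholder database/table)
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_raw"
)

# Rename and retype columns into the consumable schema
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("order_ts", "string", "order_ts", "timestamp"),
    ],
)

# Write curated output to S3 as Parquet for downstream Redshift/analytics use
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```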
Posted 23 hours ago
7.0 - 10.0 years
0 - 0 Lacs
pune, mumbai city
Remote
Position - AWS Data Engineer
Job Description: We are seeking a skilled Data Engineer with 7+ years of experience in data processing, ETL pipelines, and cloud-based data solutions. The ideal candidate will have strong expertise in AWS Glue, Redshift, S3, EMR, and Lambda, with hands-on experience using Python and PySpark for large-scale data transformations. The candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. The candidate should also have strong expertise in Terraform and Git-based CI/CD pipelines to support infrastructure automation and configuration management.
Key Responsibilities:
ETL Development & Automation:
- Design and implement ETL pipelines using AWS Glue and PySpark to transform raw data into consumable formats.
- Automate data processing workflows using AWS Lambda and Step Functions.
Data Integration & Storage:
- Integrate and ingest data from various sources into Amazon S3 and Redshift.
- Optimize Redshift for query performance and cost efficiency.
Data Processing & Analytics:
- Use AWS EMR and PySpark for large-scale data processing and complex transformations.
- Build and manage data lakes on Amazon S3 for analytics use cases.
Monitoring & Optimization:
- Monitor and troubleshoot data pipelines to ensure high availability and performance.
- Implement best practices for cost optimization and performance tuning in Redshift, Glue, and EMR.
Terraform & Git-based Workflows:
- Design and implement Terraform modules to provision cloud infrastructure across AWS/Azure/GCP.
- Manage and optimize CI/CD pipelines using Git-based workflows (e.g., GitHub Actions, GitLab CI, Jenkins, Azure DevOps).
- Collaborate with developers and cloud architects to automate infrastructure provisioning and deployments.
- Write reusable and scalable Terraform modules following best practices and code quality standards.
- Maintain version control, branching strategies, and code promotion processes in Git.
Collaboration:
- Work closely with stakeholders to understand requirements and deliver solutions.
- Document data workflows, designs, and processes for future reference.
Must-Have Skills:
- Strong proficiency in Python and PySpark for data engineering tasks.
- Hands-on experience with AWS Glue, Redshift, S3, and EMR.
- Expertise in building, deploying, and optimizing data pipelines and workflows.
- Solid understanding of SQL and database optimization techniques.
- Strong hands-on experience with Terraform, including writing and managing modules, state files, and workspaces.
- Proficiency in CI/CD pipeline design and maintenance using tools such as GitHub Actions, GitLab CI, Jenkins, or Azure DevOps Pipelines.
- Deep understanding of Git workflows (e.g., GitFlow, trunk-based development).
- Experience in serverless architecture using AWS Lambda for automation and orchestration.
- Knowledge of data modeling, partitioning, and schema design for data lakes and warehouses.
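As a hedged sketch of the "automate data processing workflows using AWS Lambda and Step Functions" responsibility in this listing, a Lambda handler could start a Glue job and hand off to a state machine roughly as follows; the job name and state machine ARN are placeholders.

```python
import json

import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")


def lambda_handler(event, context):
    """Triggered (e.g., by an S3 event) to start a Glue job and a downstream workflow."""
    # Start the curation job; the job name and argument are placeholders
    run = glue.start_job_run(
        JobName="curate-orders",
        Arguments={"--source_prefix": event.get("prefix", "incoming/")},
    )

    # Kick off a state machine that tracks the Glue run and fans out downstream steps;
    # the ARN is a placeholder for an account-specific resource
    sfn.start_execution(
        stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:orders-pipeline",
        input=json.dumps({"glue_job_run_id": run["JobRunId"]}),
    )

    return {"statusCode": 200, "body": run["JobRunId"]}
```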
Posted 23 hours ago
7.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Lead Data Engineer with 7-12 years of experience, you will be an integral part of our team, contributing significantly to the design, development, and maintenance of our data infrastructure. Your primary responsibilities will revolve around creating and managing robust data architectures, ETL processes, data warehouses, and utilizing big data and cloud technologies to support our business intelligence and analytics needs. You will lead the design and implementation of data architectures that facilitate data warehousing, integration, and analytics platforms. Developing and optimizing ETL pipelines will be a key aspect of your role, ensuring efficient processing of large datasets and implementing data transformation and cleansing processes to maintain data quality. Your expertise will be crucial in building and maintaining scalable data warehouse solutions using technologies such as Snowflake, Databricks, or Redshift. Additionally, you will leverage AWS Glue and PySpark for large-scale data processing, manage data pipelines with Apache Airflow, and utilize cloud platforms like AWS, Azure, and GCP for data storage, processing, and analytics. Establishing data governance and security best practices, ensuring data integrity, accuracy, and availability, and implementing monitoring and alerting systems are vital components of your responsibilities. Collaborating closely with stakeholders, mentoring junior engineers, and leading data-related projects will also be part of your role. Furthermore, your technical skills should include proficiency in ETL tools like Informatica Power Center, Python, PySpark, SQL, RDBMS platforms, and data warehousing concepts. Soft skills such as excellent communication, leadership, problem-solving, and the ability to manage multiple projects effectively will be essential for success in this role. Preferred qualifications include experience with machine learning workflows, certification in relevant data engineering technologies, and familiarity with Agile methodologies and DevOps practices. Location: Hyderabad Employment Type: Full-time,
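For context on the Airflow-managed pipelines this role mentions, a minimal DAG that triggers a pre-built Glue/PySpark job could look like the sketch below, assuming the Airflow Amazon provider package is installed; the DAG, job, and region names are placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# Retry settings chosen purely for illustration
default_args = {"retries": 2, "retry_delay": timedelta(minutes=10)}

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Run an existing Glue job that lands curated data in the warehouse staging area
    transform_sales = GlueJobOperator(
        task_id="transform_sales",
        job_name="transform_sales_job",
        region_name="ap-south-1",
    )
```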
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
haryana
On-site
As a Digital Product Engineering company, Nagarro is seeking a talented individual to join our dynamic and non-hierarchical work culture as a Data Engineer. With over 17,500 experts across 39 countries, we are scaling in a big way and are looking for someone with 10+ years of total experience to contribute to our team.
**Requirements:**
- The ideal candidate should possess strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark is required.
- Expertise with AWS Glue, including Crawlers and Data Catalog, is essential.
- Experience with Snowflake and a strong understanding of AWS services such as S3, Lambda, Athena, SNS, and Secrets Manager are necessary.
- Familiarity with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform is preferred.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions, is a plus.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control is expected.
- Exposure to data quality frameworks, observability, and data governance tools and practices is advantageous.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams are essential for this role.
**Responsibilities:**
- Writing and reviewing high-quality code to meet technical requirements.
- Understanding clients' business use cases and converting them into technical designs.
- Identifying and evaluating different solutions to meet clients' requirements.
- Defining guidelines and benchmarks for Non-Functional Requirements (NFRs) during project implementation.
- Developing design documents explaining the architecture, framework, and high-level design of applications.
- Reviewing architecture and design aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs.
- Designing overall solutions for defined functional and non-functional requirements and defining technologies, patterns, and frameworks.
- Relating technology integration scenarios and applying learnings in projects.
- Resolving issues raised during code reviews through systematic analysis of the root cause.
- Conducting Proof of Concepts (POCs) to ensure suggested designs/technologies meet requirements.
**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is required.
If you are passionate about Data Engineering, experienced in working with Big Data platforms, proficient in Python and PySpark, and have a strong understanding of AWS services and Infrastructure-as-Code tools, we invite you to join Nagarro and be part of our innovative team.
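To illustrate the "AWS Glue, including Crawlers and Data Catalog" expertise this listing asks for, a small boto3 sketch that registers raw S3 data in the catalog might look like the following; the crawler, role, database, and bucket names are placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Register raw S3 data in the Glue Data Catalog via a crawler (placeholder names throughout)
glue.create_crawler(
    Name="raw-orders-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="raw_db",
    Targets={"S3Targets": [{"Path": "s3://example-raw-bucket/orders/"}]},
)
glue.start_crawler(Name="raw-orders-crawler")

# Once the crawler has run, downstream jobs can discover table schemas from the catalog
tables = glue.get_tables(DatabaseName="raw_db")
for table in tables["TableList"]:
    print(table["Name"], [c["Name"] for c in table["StorageDescriptor"]["Columns"]])
```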
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As a Senior Lead Engineer specializing in Python and Spark within the AWS environment, you will have the crucial responsibility of designing, building, and maintaining robust, scalable, and efficient ETL pipelines. Your primary focus will be on ensuring alignment with data lakehouse architecture on AWS and optimizing workflows using AWS services such as Glue, Glue Data Catalog, Lambda, and S3. Your expertise will play a key role in implementing data quality and governance frameworks to maintain reliable and consistent data processing across the platform. Collaboration with cross-functional teams to gather requirements, provide technical insights, and deliver high-quality data solutions will be essential in your role. You will drive the migration of existing data processing workflows to the lakehouse architecture by leveraging Iceberg capabilities and establishing best practices for coding standards, design patterns, and system architecture. Your leadership will extend to technical discussions, mentoring team members, and fostering a culture of continuous learning and innovation. Ensuring that all solutions are secure, compliant, and meet company and industry standards will be a top priority. Key relationships in this role will include interactions with Senior Management and Architectural Group, Development Managers, Team Leads, Data Engineers, Analysts, and Agile team members. Your extensive expertise in Python and Spark, along with strong experience in AWS services, data quality and governance, and scalable architecture, will be crucial for success in this position. Desired skills include familiarity with additional programming languages such as Java, experience with serverless computing paradigms, and knowledge of data visualization or reporting tools for effective stakeholder communication. Certification in AWS or data engineering would be beneficial. A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful for this role, although equivalent professional experience or certifications will also be considered. Joining our team at LSEG means being part of a global financial markets infrastructure and data provider dedicated to driving financial stability, empowering economies, and enabling sustainable growth. Our culture, built on values of Integrity, Partnership, Excellence, and Change, guides our decision-making and actions daily. We value individuality, encourage new ideas, and are committed to sustainability across our global business. You will have the opportunity to contribute to re-engineering the financial ecosystem to support sustainable economic growth and the just transition to net zero. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives to ensure a collaborative and inclusive work environment. Please take a moment to review our privacy notice, which outlines how personal information is handled by London Stock Exchange Group (LSEG) and your rights as a data subject. If you are representing a Recruitment Agency Partner, it is essential to ensure that candidates applying to LSEG are aware of this privacy notice.,
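As a rough sketch of the Iceberg-based lakehouse migration work described in this listing, an upsert into an Iceberg table from Spark might look like the following. It assumes a Spark runtime with the Iceberg extensions and a catalog named `lakehouse` already configured (for example via spark-defaults); all table and path names are placeholders.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-merge-sketch")
    # Iceberg SQL extensions enable MERGE INTO; catalog configuration is assumed to exist already
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)

# Staged updates produced by an upstream Glue/EMR job
updates = spark.read.parquet("s3://example-staging-bucket/trades/")
updates.createOrReplaceTempView("trade_updates")

# Upsert the day's records into the lakehouse table
spark.sql("""
    MERGE INTO lakehouse.markets.trades t
    USING trade_updates u
    ON t.trade_id = u.trade_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```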
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. You will be part of a team of highly skilled professionals working with cutting-edge technologies. Our purpose is to bring real positive changes in an increasingly virtual world, transcending generational gaps and disruptions of the future. We are seeking AWS Glue Professionals with the following qualifications: - 3 or more years of experience in AWS Glue, Redshift, and Python - 3+ years of experience in engineering with expertise in ETL work with cloud databases - Proficiency in data management and data structures, including writing code for data reading, transformation, and storage - Experience in launching spark jobs in client mode and cluster mode, with knowledge of spark job property settings and their impact on performance - Proficiency with source code control systems like Git - Experience in developing ELT/ETL processes for loading data from enterprise-sized RDBMS systems such as Oracle, DB2, MySQL, etc. - Coding proficiency in Python or expertise in high-level languages like Java, C, Scala - Experience in using REST APIs - Expertise in SQL for manipulating database data, familiarity with views, functions, stored procedures, and exception handling - General knowledge of AWS Stack (EC2, S3, EBS), IT Process Compliance, SDLC experience, and formalized change controls - Working in DevOps teams based on Agile principles (e.g., Scrum) - ITIL knowledge, especially in incident, problem, and change management - Proficiency in PySpark for distributed computation - Familiarity with Postgres and ElasticSearch At YASH, you will have the opportunity to build a career in an inclusive team environment. We offer career-oriented skilling models and leverage technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace is grounded in four principles: - Flexible work arrangements, free spirit, and emotional positivity - Agile self-determination, trust, transparency, and open collaboration - Support for the realization of business goals - Stable employment with a great atmosphere and ethical corporate culture.,
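For the "spark job property settings and their impact on performance" point in this listing, a hedged example of tuning a session, with the equivalent spark-submit flags for client versus cluster mode, is sketched below; the values shown are illustrative, not recommendations.

```python
from pyspark.sql import SparkSession

# Illustrative property settings only; the right values depend on cluster size and data volume
spark = (
    SparkSession.builder
    .appName("glue-redshift-load")
    .config("spark.sql.shuffle.partitions", "200")   # shuffle parallelism for joins/aggregations
    .config("spark.executor.memory", "8g")           # per-executor heap
    .config("spark.executor.cores", "4")             # concurrent tasks per executor
    .config("spark.dynamicAllocation.enabled", "true")
    .getOrCreate()
)

# The same properties can be passed on the command line, where deploy mode is chosen explicitly:
#   spark-submit --master yarn --deploy-mode cluster --conf spark.sql.shuffle.partitions=200 job.py
#   spark-submit --master yarn --deploy-mode client job.py   # driver runs on the submitting host
```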
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an Infoscion, your primary responsibility will be to interface with clients for quality assurance issue resolution and ensure high customer satisfaction. You will play a key role in understanding requirements, creating and reviewing designs, validating architecture, and delivering high levels of service offerings in the technology domain. Participation in project estimation, providing solution delivery inputs, conducting technical risk planning, performing code reviews, and unit test plan reviews will be part of your routine. Leading and guiding teams towards developing optimized high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes will also be essential. Your contribution will be significant in building efficient programs and systems. If you believe you can assist clients in navigating their digital transformation journey effectively, this role is tailored for you. The technical requirements for this position include proficiency in Cloud Platform technologies such as AWS, specifically in Data Analytics using AWS Glue and DataBrew. If you are well-versed in these technologies and are keen on contributing to client success in their digital transformation journey, we invite you to be a part of our team at Infosys.,
Posted 2 days ago
10.0 - 18.0 years
0 Lacs
indore, madhya pradesh
On-site
You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required. Familiarity with various architecture patterns like data lake, data lake house, and data mesh is also important. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, and Data Zone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge in designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and Sagemaker is advantageous. You should also have experience with modern development workflows like git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. Possessing an AWS Professional/Specialty certification or relevant cloud expertise is a plus. In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and showcase good presentation skills when interacting with executives, IT Management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate for this position should have 10 to 18 years of experience and be able to reference the job with the number 12895.,
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior Data Engineering Architect at Iris Software, you will play a crucial role in leading enterprise-level data engineering projects on public cloud platforms like AWS, Azure, or GCP. Your responsibilities will include engaging with client managers to understand their business needs, conceptualizing solution options, and finalizing strategies with stakeholders. You will also be involved in team building, delivering Proof of Concepts (PoCs), and enhancing competencies within the organization. Your role will focus on building competencies in Data & Analytics, including Data Engineering, Analytics, Data Science, AI/ML, and Data Governance. Staying updated with the latest tools, best practices, and trends in the Data and Analytics field will be essential to drive innovation and excellence in your work. To excel in this position, you should hold a Bachelor's or Master's degree in a Software discipline and have extensive experience in Data architecture and implementing large-scale Data Lake/Data Warehousing solutions. Your background in Data Engineering should demonstrate leadership in solutioning, architecture, and successful project delivery. Strong communication skills in English, both written and verbal, are essential for effective collaboration with clients and team members. Proficiency in tools such as AWS Glue, Redshift, Azure Data Lake, Databricks, Snowflake, and databases, along with programming skills in Spark, Spark SQL, PySpark, and Python, are mandatory competencies for this role. Joining Iris Software offers a range of perks and benefits designed to support your financial, health, and overall well-being. From comprehensive health insurance and competitive salaries to flexible work arrangements and continuous learning opportunities, we are dedicated to providing a supportive and rewarding work environment where your success and happiness are valued. If you are inspired to grow your career in Data Engineering and thrive in a culture that values talent and personal growth, Iris Software is the place for you. Be part of a dynamic team where you can be valued, inspired, and encouraged to be your best professional and personal self.,
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a valued member of Infosys Consulting, you will play a crucial role in supporting large Oil & Gas/Utilities prospects by showcasing Infosys" unique value proposition through practical use cases across the value chain. Your responsibilities will include gathering, identifying, and documenting business requirements, as well as creating functional specifications for new systems and processes. Utilizing your expertise in assessing current processes, conducting gap analyses, and designing future processes, you will recommend changes and drive continuous improvement using methodologies such as Six Sigma and Lean. In your role, you will be involved in Technology Project Management, which includes overseeing technology vendors and client stakeholders. You will also manage large projects and programs in a multi-vendor, globally distributed team environment, leveraging Agile principles and DevOps capabilities. Collaboration with the IT Project Management Office will be essential as you support the implementation of client-specific digital solutions, from business case development to IT strategy and tool/software selection. Your expertise in designing and implementing scalable data pipelines, ETL/ELT workflows, and optimized data models across cloud data warehouses and lakes will enable reliable access to high-quality data for business insights and strategic decision-making. You will also be responsible for building and maintaining dashboards, reports, and visualizations using tools like Power BI and Tableau, while conducting deep-dive analyses to evaluate business performance and identify opportunities. Collaboration with business stakeholders to translate strategic objectives into data-driven solutions, defining KPIs, and enabling self-service analytics will be a key aspect of your role. Additionally, you will work closely with client IT teams and business stakeholders to uncover opportunities and derive actionable insights. Participation in internal firm-building activities and supporting sales efforts for new and existing clients through proposal creation and sales presentation facilitation will also be part of your responsibilities. To qualify for this position, you should have at least 3-5 years of experience in data engineering, ideally within the Oil & Gas or Utilities sector. Strong communication skills, both written and verbal, are essential, along with a proven track record in business analysis, product design, or project management. A Bachelor's degree or Full-time MBA/PGDM from Tier 1/Tier 2 B-Schools in India or a foreign equivalent is required. Preferred qualifications include knowledge of digital technologies and agile development practices, as well as the ability to work effectively in a cross-cultural team environment. Strong teamwork, communication skills, and the ability to interact with mid-level managers of client organizations are highly valued. This position is preferred to be located in Electronic City, Bengaluru, but other locations such as Hyderabad, Chennai, Pune, Gurgaon, and Chandigarh are also considered based on business needs. Please note that the job may require extended periods of computer work and communication via telephone, email, or face-to-face interactions.,
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
chandigarh
On-site
Adventus.io is a B2B2C SaaS-based marketplace supporting institutions, recruiters, and students within the international student placement sector. Our innovative platform allows institutions, recruiters, and students to directly connect with one another, resulting in matching the right international study experience with students across the world. Founded in 2018, we are on a mission to change the way the world accesses international education. Behind the technology, we have over 500 amazingly talented humans making it all happen. We are looking for ambitious self-starters who want to be part of our vision and create a positive legacy. You will work in an agile environment alongside application developers on a vast array of initiatives as we deploy exciting new application features to AWS hosted environments. A portion of your time will be spent assisting the Data Analytics team in building our big data collection and analytics capabilities to uncover customer, product, and operational insights. Collaborate with other Software Engineers & Data Engineers to evaluate and identify optimal cloud architectures for custom solutions. You will design, build, and deploy AWS applications at the direction of other architects including data processing, statistical modeling, and advanced analytics. Design for scale, including systems that auto-scale and auto-heal. Via automation, you will relentlessly strive to eliminate manual toil. Maintain cloud stacks utilized in running our custom solutions, troubleshoot infrastructure-related issues causing solution outage or degradation, and implement necessary fixes. You will implement monitoring tools and dashboards to evaluate health, usage, and availability of custom solutions running in the cloud. Assist with building, testing, and maintaining CI/CD pipelines, infrastructure, and other tools to allow for the speedy deployment and release of solutions in the cloud. Consistently improve the current state by regularly reviewing existing cloud solutions and making recommendations for improvements (such as resiliency, reliability, autoscaling, and cost control), and incorporating modern infrastructure as code deployment practices using tools such as CloudFormation, Terraform, Ansible, etc. Identify, analyze, and resolve infrastructure vulnerabilities and application deployment issues. You will collaborate with our Security Guild members to implement company-preferred security and compliance policies across the cloud infrastructure running our custom solutions. Build strong cross-functional partnerships. This role will interact with business and engineering teams, representing many different types of personalities and opinions. Minimum 4+ years of work experience as a DevOps Engineer building AWS cloud solutions. Strong experience in deploying infrastructure as code using tools like Terraform and CloudFormation. Strong experience working with AWS services like ECS, EC2, RDS, CloudWatch, Systems Manager, EventBridge, ElastiCache, S3, and Lambda. Strong scripting experience with languages like Bash and Python. Understanding of Full Stack development. Proficiency with GIT. Experience in container orchestration (Kubernetes). Implementing CI/CD pipeline in the project. Sustained track record of making significant, self-directed, and end-to-end contributions to building, monitoring, securing, and maintaining cloud-native solutions, including data processing and analytics solutions through services such as Segment, BigQuery, and Kafka. 
Exposure to the art of ETL, automation tools such as AWS Glue, and presentation layer services such as Data Studio and Tableau. Knowledge of web services, API, and REST. Exposure to deploying applications and microservices written in programming languages such as PHP and NodeJS to AWS. A belief in simple solutions (not easy solutions) and can accept consensus even when you may not agree. Strong interpersonal skills, you communicate technical details articulately and have demonstrable creative thinking and problem-solving abilities with a passion for learning new technologies quickly.,
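As one hedged example of the monitoring work this listing describes, a boto3 call that raises a CloudWatch alarm on errors from a data-processing Lambda could look like the sketch below; the function, topic, and region names are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-2")

# Alarm on errors from an ingestion Lambda and notify an on-call SNS topic (placeholder names)
cloudwatch.put_metric_alarm(
    AlarmName="ingest-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "ingest-handler"}],
    Statistic="Sum",
    Period=300,                  # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-southeast-2:123456789012:oncall-alerts"],
)
```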
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. As the ideal candidate, you should possess a strong technical background in Python and data science libraries, profound expertise in AI and ML algorithms, and hands-on experience in crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities. Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources, developing Python/PySpark scripts for data processing and transformation, and applying advanced machine learning techniques like Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models. Your qualifications should encompass 5-7 years of total experience with 2-3 years in AI/ML, proficiency in Python and data science libraries, hands-on experience with PySpark scripting and AWS services, strong knowledge of Bayesian methods and time series forecasting, and expertise in machine learning algorithms and deep learning frameworks. You should also have experience in structured, unstructured, and semi-structured data, advanced knowledge of distributed databases, and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential. Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R, a proven track record in developing RAG systems, and the ability to innovate and apply the latest AI techniques to real-world business challenges. Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!,
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest on 3 key pillars that our team supports i.e. Financial Crime, Financial Risk, and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets as a foundation. In this role, you will: - Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. - Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage. - Support and enhance data ingestion infrastructure and pipelines. - Design and implement data pipelines that collect data from disparate sources across the enterprise and external sources and deliver it to our data platform. - Extract Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulate data throughout our data flows, ensuring data is available at each stage in the data flow and in the form needed for each system, service, and customer along said data flow. - Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions. - Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL, and AWS Redshift). - At least 4+ years of experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator with exposure to other tools like SSIS. - At least 4+ years of experience in Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. Should implement reusability, parameterization workflow design, etc. - Advanced working SQL Knowledge and experience working with relational and NoSQL databases as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4J). - Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems. - Expertise in data modeling and DB Design with skills in performance tuning. - Experience with OLAP, OLTP databases, and data structuring/modeling with an understanding of key data points. - Experience building and optimizing data pipelines on Azure Databricks or AWS Glue or Oracle Cloud. 
- Create and Support ETL Pipelines and table schemas to facilitate the accommodation of new and existing data sources for the Lakehouse. - Experience with data visualization (Power BI/Tableau) and SSRS. Good to Have: - Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains. - Certification on any cloud tech stack preferred Microsoft Azure. - In-depth knowledge and hands-on experience with data engineering, Data Warehousing, and Delta Lake on-prem (Oracle RDBMS, Microsoft SQL Server) and cloud (Azure or AWS or Oracle Cloud). - Ability to script (Bash, Azure CLI), Code (Python, C#), query (SQL, PLSQL, T-SQL) coupled with software versioning control systems (e.g., GitHub) AND ci/cd systems. - Design and development of systems for the maintenance of the Azure/AWS Lakehouse, ETL process, business Intelligence, and data ingestion pipelines for AI/ML use cases. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.,
Posted 3 days ago
4.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Work from Office
About Company: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating a tangible impact on enterprises and society. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem. To know more about us, visit https://www.capgemini.com/in-en/careers
Position Overview
We are seeking a versatile and experienced Software Engineer to join our dynamic technology team in Hyderabad, India. The ideal candidate will have expertise in AWS Cloud services, Glue, C#/Python, APIC, React UI, Tableau, and PostgreSQL. The role involves building, enhancing, and maintaining cloud-based solutions, data pipelines, APIs, and user interfaces.
Key Responsibilities
- Design, develop, and maintain cloud-native applications on AWS Cloud.
- Develop and manage data pipelines and ETL processes using AWS Glue.
- Implement APIs using APIC (API Connect) and develop backend services in C# or Python.
- Build interactive and responsive user interfaces using React.
- Develop and support data visualization dashboards using Tableau.
- Design and optimize database solutions using PostgreSQL.
- Collaborate with cross-functional teams including data engineers, UI/UX designers, and business analysts.
- Ensure application security, scalability, and performance optimization.
- Participate in Agile development processes and ceremonies.
- Document technical designs, solutions, and best practices.
Required Skills & Experience
- 4-8 years of experience in software engineering with cloud, data, and UI development.
- Proficiency in AWS Cloud services, particularly Glue for ETL and data pipelines.
- Strong programming skills in C# and/or Python.
- Experience in developing and managing APIs using APIC (API Connect).
- Hands-on experience with React for front-end development.
- Proficiency in data visualization using Tableau.
- Strong knowledge of PostgreSQL database design and optimization.
- Familiarity with Agile methodologies.
- Good analytical, problem-solving, and communication skills.
What We Offer
- Opportunity to work on a diverse technology stack including cloud, data, APIs, and UI.
- Collaborative and Agile work environment.
- Competitive compensation and benefits.
- Continuous learning and professional development opportunities.
Posted 3 days ago
5.0 - 7.0 years
10 - 15 Lacs
Chennai
Hybrid
Designation - Module Leader
Role - ETL Developer
Location - Chennai
Notice Period - Immediate to 30 days
Experience range between 5 - 7 years of development experience in the Amazon Cloud Environment AWS (S3, AWS Glue, Amazon Redshift, Data Lake).
- Experience in SSRS
- Experience in SSIS
- Experience in ETL
- Experience in Power BI
- Experience in AWS Glue
- Create ETL jobs using Python/PySpark to fulfill the requirements.
- Ability to perform data manipulations, load, and extract from several sources of data into another schema.
- Good experience with project management practices, proficiency with Agile and Waterfall methodologies, and working with scrum teams and timely reporting.
- Experience with the software development life cycle and all its phases.
- 7 plus years of database development experience.
- Understanding of core AWS services and basic AWS architecture best practices.
- AWS technologies: S3, AWS Glue, RDS, Lambda, CloudWatch, etc.
- Troubleshoot and resolve issues related to data quality, performance, and reliability.
- Document ETL processes and workflows for future reference and be able to demo completed work.
- Optimize and maintain existing ETL processes to ensure high performance and efficiency.
- Strong analytical and collaboration skills; a team player.
- Excellent problem-solving and troubleshooting skills.
- Self-starter, able to learn and adapt quickly.
- Strong verbal and written communication skills with an ability to understand frontend users' requirements.
Note: Work timings 1pm - 11pm
Interested candidates can also share their updated resume at megha.chattopadhyay@aspiresys.com
Posted 3 days ago
5.0 - 7.0 years
11 - 16 Lacs
Bengaluru
Work from Office
As part of the Data and Technology Services practice, you will be responsible for designing and implementing scalable data solutions for major global financial services clients, leveraging AWS services within Acuitys Center of Excellence framework adhering to cloud-native best practices and security standards. Global technology megatrends, regulatory developments, competitive landscape and rise of alternative data are changing the way capital markets operate. At the Data and Technology services operations, we partner with some of the largest financial services firms and corporations in this transformative journey. We are looking for professionals who are hands on, passionate about the work and have the ambition to drive disruptive changes to global business models. Desired skills and experience B.Tech \ MCA in Computer Science from reputed College\University. AWS Certified SysOps Administrator or AWS Cloud Practitioner. Strong understanding of MDM principles on data modeling, hierarchy management, survivorship, golden record creation, and data stewardship. Knowledge of monitoring and logging tools such as CloudWatch. Experience with containerization tools like Docker and Kubernetes on AWS. Expertise in transforming complex datasets into intuitive visual stories using Power BI and Tableau. Key Responsibilities Deploy and manage AWS services (EC2, S3, RDS, Lambda, VPC, Cloudwatch, etc.) to meet business needs. Design and implement scalable MDM solutions on AWS using services such as: AWS Glue for ETL and data cataloging Amazon Redshift for data warehousing Amazon S3 for data lake storage AWS Lake Formation for data governance Amazon RDS/Aurora for transactional master data storage Amazon QuickSight for reporting and visualization Develop and maintain data pipelines to ingest, cleanse, match, merge, and publish master data entities Monitor, maintain, and optimize cloud infrastructure to ensure high availability, performance and costs. Collaborate with data stewards and governance teams to define and enforce MDM policies and standards. Implement security measures and perform regular security assessments in the AWS environment. Automate cloud deployments using infrastructure as code (e.g., CloudFormation, Terraform). Experience with commercial or open-source MDM platforms (e.g., Semarchy, Informatica MDM). Collaborate with development teams to integrate AWS services into application workflows. Proficient in developing interactive dashboards and reports using Power BI and Tableau for data-driven decision-making. Assist in troubleshooting and resolving technical issues related to AWS infrastructure. Monitor backups and ensure disaster recovery readiness. Maintain documentation of AWS environments, configurations, and processes. Required Qualifications 5+ years of experience in data engineering or cloud engineering roles. 3+ years of hands-on experience working with AWS cloud services. Strong understanding of MDM concepts, data modeling, and data governance. 2+ years of experience with Windows\Linux Operating System administration. Troubleshoot OS issues on EC2 instances. Experience in shell scripting. 2+ years of experience with Databases - Relational\Non-Relational. 1+ years of experience in data visualization tools like PowerBI/Tableau Understanding of networking concepts, including VPN, VPC peering, and security groups. Experience with CI/CD pipelines and DevOps practices in AWS. Familiarity with data cataloging tools and metadata management. Good troubleshooting and problem-solving skills. 
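To illustrate the golden-record and survivorship concepts referenced in this listing, a simplified PySpark rule that keeps one record per matched entity might look like the sketch below; the source-system ranking, column names, and S3 paths are assumptions made for the example.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("golden-record-sketch").getOrCreate()

# Customer records landed from several source systems (placeholder path and columns)
customers = spark.read.parquet("s3://example-mdm-bucket/customers_raw/")

# Simple survivorship rule: within each matched entity, prefer the most recently
# updated record from the most trusted source system
source_rank = (
    F.when(F.col("source_system") == "CRM", 1)
     .when(F.col("source_system") == "ERP", 2)
     .otherwise(3)
)
w = Window.partitionBy("entity_id").orderBy(source_rank.asc(), F.col("updated_at").desc())

golden = (
    customers
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Publish the golden records for downstream consumption (e.g., Redshift or QuickSight)
golden.write.mode("overwrite").parquet("s3://example-mdm-bucket/customers_golden/")
```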
Behavioral Competencies
- Efficiently lead client calls on a daily basis
- Be proactive and independent, able to work on projects with minimum inputs from senior stakeholders
- Evaluate and ensure quality of deliverables within defined timelines
- Experience of working with financial services clients
- Strong communication and collaboration skills.
Posted 3 days ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant, AWS DataLake!
Responsibilities
- Knowledge of Data Lake on AWS services, with exposure to creating External Tables and Spark programming; able to work on Python programming.
- Writing effective and scalable Python code for automations, data wrangling, and ETL.
- Designing and implementing robust applications and working on automations using Python code.
- Debugging applications to ensure low latency and high availability.
- Writing optimized custom SQL queries.
- Experienced in team and client handling.
- Strong documentation skills related to systems, design, and delivery.
- Integrate user-facing elements into applications.
- Knowledge of External Tables and Data Lake concepts.
- Able to allocate tasks, collaborate on status exchanges, and drive things to successful closure.
- Implement security and data protection solutions.
- Must be capable of writing SQL queries for validating dashboard outputs.
- Must be able to translate visual requirements into detailed technical specifications.
- Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python.
- Expertise in at least one popular Python framework (like Django, Flask, or Pyramid).
- Good understanding of and exposure to Git, Bamboo, Confluence, and Jira.
- Good with DataFrames and ANSI SQL using pandas.
- Team player with a collaborative approach and excellent communication skills.
Qualifications we seek in you!
Minimum Qualifications
- BE/B Tech/MCA
- Excellent written and verbal communication skills
- Good knowledge of Python and PySpark
Preferred Qualifications/Skills
- Strong ETL knowledge of any ETL tool is good to have.
- Knowledge of AWS cloud and Snowflake is good to have.
- Knowledge of PySpark is a plus.
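As a small illustration of the "handling Excel, CSV, text, JSON ... using Python" requirement in this listing, a pandas sketch that normalises mixed file formats into one DataFrame could look like the following; the file and column names are placeholders, and reading Excel assumes openpyxl is installed.

```python
import json

import pandas as pd

# Placeholder file names; the point is normalising mixed input formats into one DataFrame
orders_csv = pd.read_csv("orders.csv", parse_dates=["order_date"])
orders_xlsx = pd.read_excel("orders_backlog.xlsx", sheet_name="Sheet1")  # needs openpyxl

with open("orders_events.json") as fh:
    events = json.load(fh)                                  # assumes a top-level "orders" array
orders_json = pd.json_normalize(events, record_path="orders")

combined = pd.concat([orders_csv, orders_xlsx, orders_json], ignore_index=True)
combined = combined.drop_duplicates(subset=["order_id"]).fillna({"status": "UNKNOWN"})

# Quick check of what a dashboard query should return
print(combined.groupby("status")["amount"].sum())
```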
Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 days ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About ProCogia: We're a diverse, close-knit team with a common pursuit of providing top-class, end-to-end data solutions for our clients. In return for your talent and expertise, you will be rewarded with a competitive salary and generous benefits, along with ample opportunity for personal development. A "growth mindset" is something we seek in all our new hires and has helped drive much of our recent growth across North America. Our distinct approach is to push the limits and value derived from data. Working within ProCogia's thriving environment will allow you to unleash your full career potential. The core of our culture is maintaining a high level of cultural equality throughout the company. Our diversity and differences allow us to create innovative and effective data solutions for our clients.
Our Core Values: Trust, Growth, Innovation, Excellence, and Ownership
Location: India (Remote)
Time Zone: 12pm to 9pm IST
Job Description: We are seeking a Senior MLOps Engineer with deep expertise in AWS CDK, MLOps, and Data Engineering tools to join a high-impact team focused on building reusable, scalable deployment pipelines for Amazon SageMaker workloads. This role combines hands-on engineering, automation, and infrastructure expertise with strong stakeholder engagement skills. You will work closely with Data Scientists, ML Engineers, and platform teams to accelerate ML productization using best-in-class DevOps practices.
Key Responsibilities:
- Design, implement, and maintain reusable CI/CD pipelines for SageMaker-based ML workflows.
- Develop Infrastructure as Code using AWS CDK for scalable and secure cloud deployments.
- Build and manage integrations with AWS Lambda, Glue, Step Functions, and open table formats (Apache Iceberg, Parquet, etc.).
- Support the MLOps lifecycle: model packaging, deployment, versioning, monitoring, and rollback strategies.
- Use GitLab to manage repositories, pipelines, and infrastructure automation.
- Enable logging, monitoring, and cost-effective scaling of SageMaker instances and jobs.
- Collaborate closely with stakeholders across Data Science, Cloud Platform, and Product teams to gather requirements, communicate progress, and iterate on infrastructure designs.
- Ensure operational excellence through well-tested, reliable, and observable deployments.
Required Skills:
- 2+ years of experience in MLOps, with 4+ years of experience in DevOps or Cloud Engineering, ideally with a focus on machine learning workloads.
- Hands-on experience with GitLab CI pipelines, artifact scanning, vulnerability checks, and API management.
- Experience in Continuous Development, Continuous Integration (CI/CD), and Test-Driven Development (TDD).
- Experience in building microservices and API architectures using FastAPI, GraphQL, and Pydantic.
- Proficiency in Python 3.6 or higher and experience with Python frameworks such as Pytest.
- Strong experience with AWS CDK (TypeScript or Python) for IaC.
- Hands-on experience with Amazon SageMaker, including pipeline creation and model deployment.
- Solid command of AWS Lambda, AWS Glue, open table formats (like Iceberg/Parquet), and event-driven architectures.
- Practical knowledge of MLOps best practices: reproducibility, metadata management, model drift, etc.
- Experience deploying production-grade data and ML systems.
- Comfortable working in a consulting/client-facing environment, with strong stakeholder management and communication skills.
Preferred Qualifications:
- Experience with feature stores, ML model registries, or custom SageMaker containers.
- Familiarity with data lineage, cost optimization, and cloud security best practices.
- Background in ML frameworks (TensorFlow, PyTorch, etc.).
Education: Bachelor's or master's degree in any of the following: statistics, data science, computer science, or another mathematically intensive field.
ProCogia is proud to be an equal-opportunity employer. We are committed to creating a diverse and inclusive workspace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
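For the FastAPI/Pydantic microservice experience this listing asks for, a minimal scoring service that fronts a SageMaker endpoint might be sketched as follows; the endpoint name and response schema are assumptions, not details from the posting.

```python
import json

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")
runtime = boto3.client("sagemaker-runtime")


class ScoreRequest(BaseModel):
    features: list[float]


class ScoreResponse(BaseModel):
    score: float


@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Forward the request to a deployed SageMaker endpoint (placeholder name),
    # assuming the model returns a JSON body with a "score" field
    resp = runtime.invoke_endpoint(
        EndpointName="churn-model-endpoint",
        ContentType="application/json",
        Body=json.dumps({"features": req.features}),
    )
    payload = json.loads(resp["Body"].read())
    return ScoreResponse(score=payload["score"])
```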
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Noida, Pune, Gurugram
Hybrid
IRIS Software, a prominent IT company, is looking for a Senior AWS Data Engineer. Please find the job description below and share your updated resume at Prateek.gautam@irissoftware.com.
Role: Senior AWS Data Engineer
Location: Pune / Noida / Gurgaon
Hybrid: 3 days office, 2 days work from home
Job Description:
- 6 to 10 years of overall experience.
- Good experience in Data Engineering is required.
- Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, and Redshift is required.
- Good communication skills are required.
About Iris Software Inc.
With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
maharashtra
On-site
The Data Capability Tooling Sr Analyst role is a senior position that requires a seasoned professional with in-depth disciplinary knowledge. You will contribute to the development of new techniques and process improvements within the data analytics and data analysis domain. Your responsibilities will include performing data analytics across various asset classes and building data science/tooling capabilities within the team. Collaboration with the Enterprise Data team, particularly the front-to-back leads, will be essential to deliver business priorities effectively. You will be part of the B & I Data Capabilities team within Enterprise Data, where you will manage the Data Quality/Metrics program and implement improved data governance and data management practices across the region. The focus will be on enhancing Citigroup's approach to data risk and meeting regulatory commitments in this field. Key Responsibilities: - Conduct strategic data analysis, identify insights, and make strategic recommendations, while developing data displays to communicate complex analysis effectively. - Conduct, review, analyze, and build/engineer data capability tools and software components. - Develop and articulate business requirements, map current and future state processes, and advise on operational models tailored to each use case. - Utilize Python and analytical tools to build data science capabilities. - Perform complex data analytics on large datasets, including data cleansing, transformation, joins, and aggregation. - Create analytics dashboards using PowerBI/Tableau. - Produce high-quality business data analysis for asset classes, financial products, systems, and reports in a fast-paced environment. - Communicate findings and propose solutions to stakeholders and convert business requirements into technical design documents. - Collaborate with cross-functional teams and manage testing and implementation processes. - Design, implement, integrate, and test new features, contributing to software architecture improvements and suggesting new technologies. - Demonstrate understanding of how development functions integrate within the business and technology landscape, with a strong focus on the banking industry. - Perform other assigned duties and functions with attention to risk assessment and compliance with laws and regulations. Skills & Qualifications: - Experience in Financial Services or Finance IT. - Familiarity with Data Tracing/Data Lineage/Metadata Management Tools. - Proficiency in Python 3.x and ETL methodology. - Knowledge of BI visualization tools like Tableau and PowerBI. - Strong understanding of RDBMS, such as Oracle and MySQL, and ability to write complex SQL queries. - Experience in working with complex data warehouses and automation tools. - Excellent storytelling and communication skills. - 6-10 years of experience in statistical modeling of large datasets. - Process Improvement or Project Management experience. Education: Bachelor's/University degree, or master's degree in Information Systems, Business Analysis, or Computer Science. Citi is an equal opportunity employer that values diversity and inclusivity in the workplace.
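For a rough sense of the cleansing, join, and aggregation work this role describes, here is a small hedged pandas sketch; the file names and columns are hypothetical examples, not anything specified in the posting.

```python
import pandas as pd

# Hypothetical extracts from two source systems.
trades = pd.read_csv("trades.csv", parse_dates=["trade_date"])
accounts = pd.read_csv("accounts.csv")

# Cleansing: drop rows missing key identifiers and normalise a text field.
trades = trades.dropna(subset=["trade_id", "account_id"])
trades["asset_class"] = trades["asset_class"].str.strip().str.upper()

# Join trades to account reference data.
enriched = trades.merge(accounts, on="account_id", how="left")

# Aggregate notional by asset class and month for a PowerBI/Tableau extract.
summary = (
    enriched.assign(month=enriched["trade_date"].dt.to_period("M"))
            .groupby(["asset_class", "month"], as_index=False)["notional"]
            .sum()
)

summary.to_csv("asset_class_monthly_summary.csv", index=False)
```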
Posted 4 days ago
4.0 - 8.0 years
0 - 0 Lacs
coimbatore, tamil nadu
On-site
You have the opportunity to apply for the position of Senior ETL and Feature Engineer at PrivaSapien, based in Bangalore. PrivaSapien is at the forefront of Privacy Enhancing & Responsible AI Technologies, where you will play a crucial role in setting up the big data ecosystem for the world's first privacy red teaming and blue teaming platform. As an individual contributor, you will work on cutting-edge privacy platform requirements with clients globally, spanning various industry verticals. Joining as one of the early employees, you will receive a significant ESOP option and collaborate with brilliant minds from prestigious institutions such as IISc and IIMs. Your responsibilities will include developing and maintaining ETL pipelines for processing large-scale datasets, creating a Python connector for ETL applications, and demonstrating proficiency in AWS Glue. You will be involved in ETL pipeline development for AI/ML workloads, orchestrating scaling, and resource management. Additionally, you will work on managing unstructured data tasks, optimizing query performance in SQL databases, and integrating multiple databases into the ETL pipeline within a multi-cloud environment. To be eligible for this role, you should have a minimum of 4 years of hands-on experience in setting up ETL and feature engineering pipelines on cloud or big data ecosystems. Proficiency in Apache Spark, PySpark, Apache Airflow, and AWS Glue is essential, along with expertise in at least one ETL tool. Strong programming skills in Python, familiarity with data manipulation libraries, and experience in handling various data types are required. Furthermore, you should possess knowledge of SQL databases, networking, security, and cloud platforms. The interview process will consist of a technical round with the Director, an assessment, an assessment review round with the Senior Backend person, and an HR round. To apply for this opportunity, you need to register or log in on the portal, fill out the application form, clear the video screening, and click on "Apply" to be shortlisted. Your profile will then be shared with the client for the interview round upon selection. At Uplers, our aim is to simplify and expedite the hiring process, assisting talents in finding and applying for relevant contractual onsite opportunities. We provide support for any challenges faced during the engagement and assign a dedicated Talent Success Coach to guide you throughout the process. If you are prepared for a new challenge, a conducive work environment, and an opportunity to elevate your career, seize this chance today. We look forward to welcoming you aboard!
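Since this role pairs Apache Airflow with AWS Glue, a minimal sketch of an Airflow DAG that orchestrates a Glue job is shown below; it assumes the apache-airflow-providers-amazon package is installed, and the DAG id, schedule, and job name are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# Hypothetical daily feature-engineering pipeline that triggers an existing Glue job.
with DAG(
    dag_id="feature_engineering_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:

    run_feature_etl = GlueJobOperator(
        task_id="run_feature_etl",
        job_name="feature-etl-job",        # existing Glue job (hypothetical name)
        script_args={"--env": "dev"},      # passed to the Glue job as job arguments
        wait_for_completion=True,
    )
```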
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will be working as part of a team to develop Data and Analytics solutions. Your responsibilities will include participating in the development of cloud data warehouses, data as a service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure the delivery of a quality product. Experience in developing Modern Data Warehouse solutions using the Azure or AWS stack is required. To be successful in this role, you should have a Bachelor's degree in computer science & engineering or equivalent demonstrable experience. It is desirable to have Cloud Certifications in the Data, Analytics, or Ops/Architect space. Your primary skills should include: - 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions - Programming experience in Scala or Python, SQL - Minimum of 1 year of experience in MDM/PIM Solution Implementation with tools like Ataccama, Syndigo, Informatica - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Snowflake - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Databricks - Working knowledge of some AWS and Azure services like S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, Azure Synapse - Demonstrated analytical and problem-solving skills - Excellent written and verbal communication skills in English. Your secondary skills should include familiarity with Agile practices and version control platforms like Git and CodeCommit, problem-solving skills, an ownership mentality, and a proactive rather than reactive approach. This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in the field of Data Engineering, we encourage you to apply before the close date on 11-10-2024.
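Because the role calls for data engineering pipelines in Snowflake, here is a minimal hedged sketch using the Snowflake Python connector to run a COPY INTO load from a stage; the account, credentials, stage, and table names are placeholders and would normally come from a secrets manager rather than being hard-coded.

```python
import snowflake.connector

# Hypothetical connection details; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CURATED",
)

try:
    cur = conn.cursor()
    # Load Parquet files from an external stage into a target table.
    cur.execute(
        """
        COPY INTO curated.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """
    )
    print(cur.fetchall())  # COPY returns one result row per loaded file
finally:
    conn.close()
```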
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III - Java Full Stack Developer + React + AWS at JPMorgan Chase within the Commercial & Investment Bank team, you'll serve as a member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You'll be responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. Execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Create secure and high-quality production code and maintain algorithms that run synchronously with appropriate systems. Create architectural and design documentation for complex applications, ensuring that the software code development adheres to the specified design constraints. Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture. Contribute to software engineering communities of practice and events that explore new and emerging technologies. Add to the team culture of diversity, equality, inclusion, and respect. Required qualifications, capabilities, and skills include formal training or certification on software engineering concepts and 3+ years of proficient applied experience. Hands-on practical experience in system design, application development, testing, and operational stability. Strong experience in the latest Java versions, Spring Boot and Spring Framework, JDBC, JUnit. Experience in RDBMS and NoSQL databases. Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Proficiency in Java/J2EE and REST APIs, Web Services, and experience in building event-driven microservices. Experience in developing UI applications using React and Angular. Working proficiency in development toolsets like Git/Bitbucket, JIRA, Maven. Proficiency in automation and continuous delivery methods. Strong analytical skills and problem-solving ability. Working knowledge of AWS and an AWS certification is a must. Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security. Design, deploy, and manage AWS cloud infrastructure using services such as EC2, S3, RDS, Kubernetes, Terraform, Lambda, and VPC. Working knowledge of AWS Glue, AWS Athena & AWS S3. Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages. Hands-on practical experience delivering system design, application development, testing, and operational stability. Collaborate with development teams to create scalable, reliable, and secure cloud architectures. Preferred qualifications, capabilities, and skills include exposure to the latest Python libraries, knowledge of AWS Lake Formation, familiarity with modern front-end technologies, experience in big data technologies such as Hadoop, and experience with caching solutions such as Hazelcast and Redis.
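For the AWS Athena and S3 piece of this role, a small hedged sketch of running an Athena query from Python with boto3 could look like this; the database, table, and results bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical query over a Glue-catalogued table backed by S3 data.
query = "SELECT trade_date, COUNT(*) AS trades FROM trades GROUP BY trade_date"

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "markets_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # the first row is the header
```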
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
maharashtra
On-site
The Business Analyst position at Piramal Critical Care (PCC) within the IT department in Kurla, Mumbai involves acting as a liaison between PCC system users, software support vendors, and internal IT support teams. The ideal candidate is expected to be a technical contributor and advisor to PCC business users, assisting in defining strategic application development and integration to support business processes effectively. Key stakeholders for this role include internal teams such as Supply Chain, Finance, Infrastructure, PPL Corporate, and Quality, as well as external stakeholders like the MS Support team, 3PLs, and Consultants. The Business Analyst will report to the Chief Manager - IT Business Partner. The ideal candidate should hold a B.S. in Information Technology, Computer Science, or equivalent, with 8-10 years of experience in Data warehousing, BI, Analytics, and ETL tools. Experience in the Pharmaceutical or Medical Device industry is required, along with familiarity with large global Reporting tools like Qlik/Power BI, SQL, Microsoft Power Platform, and other related platforms. Knowledge of the computer system validation lifecycle, project management tools, and office tools is also essential. Key responsibilities of the Business Analyst role include defining user and technical requirements, leading implementation of Data Warehousing, Analytics, and ETL systems, managing vendor project teams, maintaining partnerships with business teams, and proposing IT budgets. The candidate will collaborate with IT and business teams, manage ongoing business applications, ensure system security, and present project updates to the IT Steering committee. The successful candidate must possess excellent interpersonal and communication skills, self-motivation, a proactive customer service attitude, leadership abilities, and a strong service focus. They should be capable of effectively communicating business needs to technology teams, managing stakeholder expectations, and working collaboratively to achieve results. Piramal Critical Care (PCC) is a subsidiary of Piramal Pharma Limited (PPL) and is a global player in hospital generics, particularly Inhaled Anaesthetics. PCC is committed to delivering critical care solutions globally and maintaining sustainable growth for stakeholders. With a wide presence across the USA, Europe, and over 100 countries, PCC's product portfolio includes Inhalation Anaesthetics and Intrathecal Baclofen therapy. PCC has a workforce of over 400 employees across 16 countries and is dedicated to expanding its global footprint through new product additions in critical care. Committed to corporate social responsibility, PCC collaborates with partner organizations to provide hope and resources to those in need while caring for the environment.
Posted 4 days ago
6.0 - 10.0 years
20 - 35 Lacs
Pune, Delhi / NCR
Hybrid
Job Description. Responsibilities: Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality. AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc. Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options. Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services. Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources. Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency. Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud. Qualifications: Bachelor's degree in computer science, engineering, or a related field. 6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms. Strong understanding of data warehousing and data lake concepts. Proficiency in SQL and at least one programming language (Python/PySpark). Good to have: experience with big data technologies like Hadoop, Spark, and Kafka. Knowledge of data modeling and data quality best practices. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team. Preferred Qualifications: AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect certification) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and the ability to deliver independently.
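To make the Redshift-plus-S3 warehousing responsibilities above more concrete, the sketch below uses the Redshift Data API from Python to run a COPY of Parquet data from a data lake; the cluster, database, user, table, bucket, and IAM role names are all hypothetical.

```python
import time

import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

# Hypothetical COPY of curated Parquet data from a data lake into Redshift.
copy_sql = """
    COPY analytics.orders
    FROM 's3://example-curated-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)

# Poll until the statement finishes, then report the outcome.
while True:
    status = client.describe_statement(Id=response["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(5)

print(f"COPY finished with status: {status}")
```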
Posted 4 days ago
AWS Glue is a popular ETL (Extract, Transform, Load) service offered by Amazon Web Services. As businesses in India increasingly adopt cloud technologies, the demand for AWS Glue professionals is on the rise. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries in India.
Here are 5 major cities in India actively hiring for AWS Glue roles:
- Bangalore
- Mumbai
- Delhi
- Hyderabad
- Pune
The salary range for AWS Glue professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries in the range of INR 12-18 lakhs per annum.
A typical career path in AWS Glue may look like:
- Junior AWS Glue Developer
- AWS Glue Developer
- Senior AWS Glue Developer
- AWS Glue Tech Lead
In addition to AWS Glue expertise, professionals in this field are often expected to have knowledge of:
- AWS services like S3, Lambda, and Redshift
- Programming languages like Python or Scala
- ETL concepts and best practices
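As a small example of how these pieces typically fit together, here is a hedged sketch of an AWS Lambda handler in Python that starts a Glue job when a new file lands in S3; the Glue job name and job argument are hypothetical.

```python
import json
import urllib.parse

import boto3

glue = boto3.client("glue")

# Hypothetical Glue job name; in practice this would come from configuration.
GLUE_JOB_NAME = "curate-orders-job"


def lambda_handler(event, context):
    """Start a Glue ETL job for each new object reported by an S3 event notification."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        run = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started Glue job run {run['JobRunId']} for s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps("ok")}
```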
As you prepare for AWS Glue job interviews in India, make sure to brush up on your technical skills and showcase your expertise in ETL and AWS services. With the right preparation and confidence, you can land a rewarding career in this growing field. Good luck!