5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: ETL Testing - Python & SQL
Candidate Specification: 5+ years, open for shift - 1 PM to 10 PM. ETL (Python) - all 5 days WFO; ETL (SQL) - hybrid. Location: Chennai.
Job Description: Experience in ETL testing or data warehouse testing. Strong in SQL Server, MySQL, or Snowflake. Strong in scripting languages, especially Python. Strong understanding of data warehousing concepts, ETL tools (e.g., Informatica, Talend, SSIS), and data modeling. Proficient in writing SQL queries for data validation and reconciliation. Experience with testing tools such as HP ALM, JIRA, TestRail, or similar. Excellent problem-solving skills and attention to detail.
Skills Required
Role: ETL Testing
Industry Type: IT/Computers - Software
Functional Area: ITES/BPO/Customer Service
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent
Key Skills: ETL, PYTHON, SQL
Other Information
Job Code: GO/JC/185/2025
Recruiter Name: Sheena Rakesh
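For illustration, a minimal sketch of the kind of SQL-based data validation and reconciliation this role calls for; the staging and warehouse table names (stg_orders, dw_orders) and columns are hypothetical, and sqlite3 merely stands in for SQL Server/MySQL/Snowflake connectors:

```python
# Compare simple aggregates between a staging (source) table and the
# warehouse (target) table; any mismatch indicates an ETL defect.
import sqlite3  # stand-in for SQL Server / MySQL / Snowflake connectors

RECON_SQL = """
SELECT 'row_count' AS check_name,
       (SELECT COUNT(*) FROM stg_orders) AS source_value,
       (SELECT COUNT(*) FROM dw_orders)  AS target_value
UNION ALL
SELECT 'total_amount',
       (SELECT SUM(amount) FROM stg_orders),
       (SELECT SUM(amount) FROM dw_orders);
"""

def failed_checks(conn: sqlite3.Connection) -> list:
    """Return the reconciliation checks where source and target disagree."""
    rows = conn.execute(RECON_SQL).fetchall()
    return [r for r in rows if r[1] != r[2]]
```

In practice the same pattern is scripted so the checks can run inside a test framework and be logged to tools like JIRA or TestRail.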
Posted 4 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: Power BI Developer
Location: Chennai/Hyderabad/Bangalore
Candidate Specification: Any Graduate, min 6+ years relevant experience
Job Description: Strong proficiency in DAX, Power Query (M), and SQL. Experience in data modelling and creating relationships within datasets. Understanding of ETL processes and data warehousing concepts.
Skills Required
Role: Power BI Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: POWER BI, POWER PLATFORM, POWER APPS, AWS, AZURE
Other Information
Job Code: GO/JC/174/2025
Recruiter Name: Sheena Rakesh
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: Data Engineer
Candidate Specification: 5+ years, immediate to 30 days. (All 5 days work from office, 9 hours.)
Job Description: Experience with any modern ETL tools (PySpark, EMR, Glue, or others). Experience in AWS; programming knowledge in Python, Java, and Snowflake. Experience in DBT and StreamSets (or similar tools like Informatica, Talend), with migration work done in the past. Agile experience is required, with VersionOne or Jira tool expertise. Provide hands-on technical solutions to business challenges and translate them into process/technical solutions. Good knowledge of CI/CD and DevOps principles. Experience in data technologies: Hadoop and PySpark/Scala (any one).
Skills Required
Role: Data Engineer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: B.Tech
Employment Type: Full Time, Permanent
Key Skills: PYSPARK, EMR, GLUE, ETL TOOL, AWS, CI/CD, DEVOPS
Other Information
Job Code: GO/JC/102/2025
Recruiter Name: Sheena Rakesh
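For illustration, a minimal PySpark sketch of the kind of modern ETL pipeline this posting describes; the S3 paths, columns, and transformations are hypothetical:

```python
# Extract raw JSON from a landing bucket, apply basic cleansing, and load
# partitioned Parquet for downstream consumers (e.g., Snowflake or Glue tables).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://example-landing/orders/2025/06/")   # Extract

clean = (                                                       # Transform
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

(clean.write.mode("overwrite")                                  # Load
      .partitionBy("order_date")
      .parquet("s3://example-curated/orders/"))
```

The same script can run largely unchanged on EMR or as an AWS Glue job.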
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
It was nice visiting your profile in the portal. One of our top MNC clients has a critical job opening for an Artificial Intelligence (AI) Engineer at the Pune location. Please apply with relevant profiles.
Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description
About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities
Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.
Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.
NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.
Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.
Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.
Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. Product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.
This job is provided by Shine.com
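For illustration, a minimal, self-contained sketch of one slice of the lifecycle described above (feature engineering plus an NLP text classifier); the toy texts and labels are purely illustrative:

```python
# Train and evaluate a tiny text classifier with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = ["shipment delayed again", "invoice processed successfully",
         "warehouse stock running low", "payment received on time"]
labels = ["logistics", "finance", "logistics", "finance"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # feature engineering
    ("model", LogisticRegression(max_iter=1000)),    # classifier
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```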
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Greetings from TCS! TCS is hiring for ETL Testing.
Desired Experience Range: 4 to 8 years
Job Location: Gurugram
Should be strong in Azure and ETL Testing (highly important) and SQL, with good knowledge of Data Warehousing (DWH) concepts. Able to work individually and meet testing delivery expectations end to end. Able to analyze requirements, proactively identify scenarios, and coordinate with the business team to get them clarified. Able to understand, convert, and verify business transformation logic in technical terms. Should be willing and ready to put in additional effort to learn SAS. Should be willing and ready to put in additional effort to learn Python and PySpark.
Thanks, Anshika
Posted 4 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Must-have skills: Power BI, Dundas BI, Tableau, Cognos
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field. 7+ years of experience as a Report Writer, BI Developer, or SQL Developer. Advanced proficiency in SQL (MySQL, PostgreSQL, or similar RDBMS). Experience developing and maintaining reports using BI tools such as Dundas BI, Power BI, Tableau, or Cognos. Strong knowledge of data modeling techniques and relational database design. Familiarity with ETL processes, data warehousing concepts, and performance tuning. Exposure to cloud platforms (Azure, AWS) is a plus. Experience working in Agile/Scrum environments. Strong analytical and problem-solving skills. Excellent communication skills and ability to work in a team environment.
Skills: data warehousing, MySQL, problem-solving, Agile, AWS, performance tuning, ETL processes, relational database design, Azure, SQL, data modeling, report writing, Scrum, analytical skills, communication, Cognos, Dundas BI, PostgreSQL, Power BI, Tableau
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Marsh McLennan is seeking candidates for the following position based in the Pune office: Senior Engineer/Principal Engineer.
What can you expect?
We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.
What is in it for you?
Holidays (as per the location)
Medical & insurance benefits (as per the location)
Shared transport (provided the address falls in the service zone)
Hybrid way of working
Diversify your experience and learn new skills
Opportunity to work with stakeholders globally to learn and grow
We will count on you to:
Design and implement scalable data solutions that support our data-driven decision-making processes.
What you need to have:
SQL and RDBMS knowledge (5/5), Postgres. Extensive hands-on experience with database systems: tables, schemas, views, materialized views.
AWS knowledge: core and data engineering services, with Glue, Lambda, EMR, DMS, and S3 in focus.
ETL knowledge: any ETL tool, preferably Informatica.
Data warehousing.
Big data: Hadoop concepts; Spark (3/5); Hive (5/5).
Python/Java.
Interpersonal skills: excellent communication skills and team lead capabilities; understanding of data systems in large organizational setups; passion for deep diving into data and delivering value out of it.
What makes you stand out?
Databricks knowledge.
Any reporting tool experience, preferably MicroStrategy.
Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.
Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.
Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.
R_299578
Posted 4 days ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Job Title: Data Engineer
Location: Chennai
Candidate Specification: Any Graduate, min 6+ years relevant experience
Job Description: The role involves spinning up and managing AWS data infrastructure and building data ingestion pipelines in StreamSets and EMR. Candidates must have working experience with any modern ETL tools (PySpark, EMR, Glue, or others).
Skills Required
Role: Data Engineer - Chennai
Employment Type: Full Time, Permanent
Key Skills: DBT, SNOWFLAKE, AWS, ETL, INFORMATICA, EMR, PYSPARK, GLUE
Other Information
Job Code: GO/JC/046/2025
Posted 4 days ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Schneider Digital is seeking an Informatica developer who will be an excellent addition to the Business Intelligence team.
Responsibilities:
This role is for BI KPI portal operations and development, covering domains including data integration, data extraction from legacy systems, data warehousing, and efficient Extract/Transform/Load (ETL) workflows (Informatica PowerCenter and Informatica Cloud services).
Strong experience in design, development and testing of Informatica-based applications (PowerCenter 10.2 and Informatica Cloud services).
Should have strong knowledge of Oracle database, PL/SQL development, and UNIX scripting.
Should understand the overall system landscape, upstream and downstream systems.
Excellent knowledge of debugging, tuning and optimizing performance of database queries.
Good experience in data integration: data extraction from legacy systems and load into Redshift and Redshift Spectrum.
Supports the module in production, resolves hot issues, and implements and deploys enhancements to the application/package.
Should be proficient in the end-to-end software development life cycle, including requirement analysis, design, development, code review, and testing.
Responsible for ensuring defect-free and on-time delivery.
Responsible for issue resolution along with corrective and preventive measures.
Should be able to manage a diverse set of stakeholders and report on key project metrics/KPIs.
Lead brainstorming sessions, provide guidance to team members, identify value-creation areas, and be responsible for quality control.
Establish standard processes and procedures and promote team collaboration and self-improvement.
Should be able to work with agile methodologies (Jira).
Qualification & Education:
Graduate engineering degree (B.E./B.Tech)
4+ years of experience working with Informatica, Informatica Cloud, data warehousing and Unix scripting
4+ years of experience working in Agile teams
Demonstrates strong ability to articulate technical concepts and implications to business partners
Excellent communication skills
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Greater Kolkata Area
On-site
It was nice visiting your profile in the portal. One of our top MNC clients has a critical job opening for an Artificial Intelligence (AI) Engineer at the Pune location. Please apply with relevant profiles.
Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description
About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities
Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.
Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.
NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.
Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.
Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.
Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. Product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.
This job is provided by Shine.com
Posted 4 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: Power BI Developer
Location: Chennai/Hyderabad/Bangalore
Candidate Specification: Any Graduate, min 6+ years relevant experience
Job Description: Strong proficiency in DAX, Power Query (M), and SQL. Experience in data modelling and creating relationships within datasets. Understanding of ETL processes and data warehousing concepts.
Skills Required
Role: Power BI Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: POWER BI, POWER PLATFORM, POWER APPS, AWS, AZURE
Other Information
Job Code: GO/JC/174/2025
Recruiter Name: Sheena Rakesh
Posted 4 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: Power BI Developer
Location: Chennai/Hyderabad/Bangalore
Candidate Specification: Any Graduate, min 6+ years relevant experience
Job Description: Strong proficiency in DAX, Power Query (M), and SQL. Experience in data modelling and creating relationships within datasets. Understanding of ETL processes and data warehousing concepts.
Skills Required
Role: Power BI Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: POWER BI, POWER PLATFORM, POWER APPS, AWS, AZURE
Other Information
Job Code: GO/JC/174/2025
Recruiter Name: Sheena Rakesh
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Should have experience in Java, JavaScript, JSP, JSF, J2EE, Grammar, EJB, MDB, VXML, XML, and REST/SOAP integration.
Should have experience in VoiceXML.
Should have experience in Agile methodology, with preferred experience in SAFe methodology.
Preferred: Viecore framework.
Preferred: ETL, JCL, COBOL.
Preferred: experience working with DevSecOps, GitHub, RTC, and automation tools.
Preferred: experience in call center technologies like Avaya, Cisco, chatbots, etc.
Skills Required
Preferred knowledge and/or experience working in Salesforce.
Role: Java Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: B.Com, B.Sc., B.Tech CSE, B.Tech ECE, B.Tech., B.C.A, B.E., Bachelor of Arts, Bachelor of Engineering, Bachelor's degree
Employment Type: Full Time, Permanent
Key Skills: AGILE METHODOLOGIES, AVAYA ACD, CISCO, J2EE, JAVA, SPRING, SPRINGBOOT, HIBERNATE
Other Information
Job Code: GO/JC/098/2025
Recruiter Name: Mithra Dayalan
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
It was nice visiting your profile in the portal. One of our top MNC clients has a critical job opening for an Artificial Intelligence (AI) Engineer at the Pune location. Please apply with relevant profiles.
Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description
About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities
Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.
Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.
NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.
Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.
Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.
Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. Product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.
This job is provided by Shine.com
Posted 4 days ago
11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're Hiring: Senior Java Architect
Location: Hyderabad only (hybrid, 3 days in office)
Experience: 11+ years
Notice Period: Immediate joiners to 15 days only
Required Skills:
Programming Languages: Proficiency in Java.
Web Development: Experience with SOAP and RESTful services.
Database Management: Strong knowledge of SQL (Oracle).
Version Control: Expertise in using version control systems like Git.
CI/CD: Familiarity with CI/CD tools such as GitLab CI and Jenkins.
Containerization & Orchestration: Experience with Docker and OpenShift.
Messaging Queues: Hands-on experience with IBM MQ and Apache Kafka.
Cloud Services: Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
Adept working experience in the design and development of performance-efficient ETL flows dealing with millions of rows in volume.
Must have experience working in a SAFe Agile Scrum project delivery model.
Good at writing complex SQL queries to pull data out of RDBMS databases like Oracle, SQL Server, DB2, Teradata, etc.
Good working knowledge of Unix scripts.
Batch job scheduling software such as CA ESP.
Experienced in using CI/CD methodologies.
Required Experience:
Must have 11-13 years of hands-on development experience.
Extensive experience developing and maintaining APIs.
Experience managing and/or leading a team of developers.
Working knowledge of data modelling, solution architecture, normalization, data profiling, etc.
Adherence to good coding practices and technical documentation; must be a good team player.
Posted 4 days ago
0 years
0 Lacs
India
On-site
Core data engineering tools and frameworks like Azure, Snowflake, DBT, Python, SQL, Git
Solid data modelling and data management fundamentals to drive the ETL development lifecycle at all layers (conceptual, logical, physical)
Git versioning practices for CI/CD DevOps alignment
The ability to operate independently, take ownership, and provide mentorship within the team
Posted 4 days ago
5.0 years
0 Lacs
India
On-site
Key Responsibilities:
MDM Solution Development: Design and develop master data models, hierarchies, and workflows using Informatica MDM Hub and IDD (Informatica Data Director).
Data Integration: Integrate data from various source systems into the MDM platform, ensuring seamless data flow and synchronization.
Data Quality Management: Implement data quality rules, validation processes, and exception handling to maintain high data standards.
Match & Merge Configuration: Define and configure match and merge rules to identify and consolidate duplicate records.
Workflow Automation: Automate data stewardship workflows and approval processes to enhance operational efficiency.
Performance Optimization: Monitor and optimize MDM system performance to ensure scalability and responsiveness.
Collaboration: Work closely with business stakeholders, data stewards, and IT teams to align MDM solutions with business requirements.
Documentation: Maintain comprehensive technical documentation, including design specifications, configuration guides, and troubleshooting manuals.
Required Qualifications:
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience: Minimum of 5 years of hands-on experience with Informatica MDM, including MDM Hub, IDD, and ActiveVOS.
Technical Skills: Proficiency in SQL and experience with relational databases (e.g., Oracle, SQL Server). Familiarity with data integration tools and ETL processes. Knowledge of data governance principles and best practices.
Soft Skills: Strong problem-solving abilities, excellent communication skills, and the ability to work effectively in a collaborative team environment.
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
Job Summary/Overview: We are seeking a highly experienced and skilled Senior GCP Data Engineer to design, develop, and maintain data pipelines and data warehousing solutions on the Google Cloud Platform (GCP). This role requires a strong understanding of data engineering principles and a proven track record of success in building and managing large-scale data solutions. The ideal candidate will be proficient in various GCP services and have experience working with large datasets.
Key Responsibilities:
* Design, develop, and implement robust and scalable data pipelines using GCP services.
* Develop and maintain data warehousing solutions on GCP.
* Perform data modeling, ETL processes, and data quality assurance.
* Optimize data pipeline performance and efficiency.
* Collaborate with other engineers and stakeholders to define data requirements and solutions.
* Troubleshoot and resolve data-related issues.
* Contribute to the development and improvement of data engineering best practices.
* Participate in code reviews and ensure code quality.
* Document technical designs and processes.
Required Qualifications:
* Bachelor's degree in Computer Science, Engineering, or a related field.
* 7+ years of experience as a Data Engineer.
* Extensive experience with GCP services, including BigQuery, Dataflow, Dataproc, Cloud Storage, and Cloud Pub/Sub.
* Proven experience designing and implementing data pipelines using ETL/ELT processes.
* Experience with data warehousing concepts and best practices.
* Strong SQL and data modeling skills.
* Experience working with large datasets.
Preferred Qualifications:
* Master's degree in Computer Science, Engineering, or a related field.
* Experience with data visualization tools.
* Experience with data governance and compliance.
* Experience with containerization technologies (e.g., Docker, Kubernetes).
* Experience with Apache Kafka or similar message queuing systems.
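For illustration, a minimal sketch of an in-warehouse ELT step using the BigQuery Python client; the project, dataset, and table names are hypothetical:

```python
# Aggregate raw events into a daily summary table directly inside BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
CREATE OR REPLACE TABLE analytics.daily_orders AS
SELECT DATE(order_ts) AS order_date,
       COUNT(*)       AS orders,
       SUM(amount)    AS revenue
FROM raw.orders
GROUP BY order_date
"""
client.query(sql).result()  # .result() blocks until the query job finishes
print("analytics.daily_orders refreshed")
```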
Posted 4 days ago
3.0 years
0 Lacs
India
Remote
AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time
About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.
Key Responsibilities:
Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
Optimize pipeline performance and ensure reliability and fault tolerance
Collaborate with cross-functional teams including data scientists and analysts
Perform data transformations using Python, Pandas, and SQL
Maintain data integrity, quality, and security across the platform
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions
Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
3+ years of experience in data engineering using AWS
Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
Solid understanding of data modeling, warehousing, and pipeline orchestration
Experience with version control (Git) and infrastructure as code (Terraform)
Preferred Skills:
Experience working with energy sector data or IoT/sensor-based data
Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
Familiarity with big data technologies like Apache Spark and Kafka
Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
AWS Certifications (Data Analytics, Solutions Architect)
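For illustration, a minimal sketch of the event-driven pipeline style this role describes: an AWS Lambda handler that starts a Glue job whenever a new object lands in S3. The Glue job name and its argument are hypothetical:

```python
# Triggered by an S3 put-event notification; kicks off a Glue ETL run
# pointed at the newly arrived object.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        run = glue.start_job_run(
            JobName="orders-etl",  # hypothetical Glue job
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started Glue run {run['JobRunId']} for s3://{bucket}/{key}")
```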
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
About the Role: We are seeking a passionate and proven Full-Stack Software Engineer I to join our collaborative and fast-paced R&D team. In this role, you will be responsible for implementing software features across our suite of R&D applications. You will work closely with engineers and research scientists to transform business requirements into robust products and features that support data science and research initiatives.
Technologies We Use: Python, FastAPI, SQLAlchemy, Postgres, TypeScript, Next.js, AWS, Terraform
Key Responsibilities:
Design, build, and maintain efficient, reusable, and reliable applications and systems using Python, TypeScript/JavaScript, and AWS
Collaborate with end-users to understand requirements, develop use cases, and translate them into scalable technical solutions
Develop creative and scalable engineering solutions for structured and unstructured data integration
Continuously improve code quality through unit testing, automation, and code reviews
Contribute to team discussions to improve our technology stack, coding standards, and product development
Required Qualifications:
7+ years of professional software development experience
Strong experience with web frameworks such as Next.js and Strapi
Proficiency with API frameworks, particularly FastAPI
Solid understanding of relational databases and SQL
Hands-on experience with CI/CD pipelines
Proficiency with AWS or other cloud platforms
Strong grasp of OOP principles and software design best practices
Ability to work independently with minimal supervision
Preferred Qualifications:
Experience in Linux/Unix environments
Exposure to Agile development methodologies
Experience with building cloud-based data pipelines and ETL processes
Familiarity with Neo4j or other graph databases
Knowledge of C# and .NET
Understanding of DevOps and cloud security best practices
Experience with Infrastructure as Code (IaC) tools like Terraform
Self-motivated and eager to learn new technologies
If you're excited about solving challenging problems, working with modern technologies, and contributing to impactful research and data initiatives, we would love to hear from you.
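For illustration, a minimal sketch in the FastAPI style this stack names; the resource model is illustrative, and the in-memory dict stands in for a Postgres table accessed via SQLAlchemy:

```python
# A tiny CRUD-style API: create and fetch 'experiment' records.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Experiment(BaseModel):
    id: int
    name: str
    status: str = "queued"

_DB: dict[int, Experiment] = {}  # stand-in for Postgres via SQLAlchemy

@app.post("/experiments", response_model=Experiment)
def create_experiment(exp: Experiment) -> Experiment:
    _DB[exp.id] = exp
    return exp

@app.get("/experiments/{exp_id}", response_model=Experiment)
def get_experiment(exp_id: int) -> Experiment:
    if exp_id not in _DB:
        raise HTTPException(status_code=404, detail="not found")
    return _DB[exp_id]
```

Run locally with `uvicorn main:app --reload`, assuming the file is saved as main.py.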
Posted 4 days ago
0 years
0 Lacs
India
Remote
Data Analyst
Experience: 4 to 6 years
Remote - UK hours
Prioritizes and defines backlog items, clarifies requirements, and collaborates with stakeholders to deliver validated technical data mappings aligned with business needs and integration efforts.
* Prioritize work, clarify requirements, and build relationships with the Product Owner.
* Define stories and prioritize the team backlog to streamline execution.
* Analyse and interpret data and create technical data mappings that meet business requirements.
* Contribute to the integration development process and data validations alongside data engineers.
* Work with the Lead BA, Technical Leads, and Engineers to ensure technical mappings are validated and recorded per the agreed design template.
* Snowflake, SQL
* Data specialist
* ETL/ELT experience
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
It was nice visiting your profile in the portal. One of our top MNC clients has a critical job opening for an Artificial Intelligence (AI) Engineer at the Pune location. Please apply with relevant profiles.
Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description
About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities
Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.
Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.
NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.
Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.
Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.
Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. Product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.
This job is provided by Shine.com
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Java + Angular Developer
Experience: 4-5 Years
Location: [Pune]
Job Summary
We are seeking a highly skilled Java + Angular Developer with 4-5 years of hands-on experience in full-stack development. The ideal candidate will have strong expertise in Java, Spring Boot, and Angular, along with a solid understanding of front-end technologies and backend integration.
Backend Technical Skillset Required:
Java 8+
Spring Boot, Spring MVC, Spring Web Services, Spring Data
Hibernate
JasperReports
Oracle SQL, PL/SQL development
Pentaho Kettle (ETL tool)
Basic Linux scripting and troubleshooting
Git (version control)
Strong grasp of design patterns
Frontend
Angular 8+
React 16+ (good to have)
Angular Material
Bootstrap 4
HTML5, CSS3, SCSS
JavaScript & TypeScript
Job Responsibilities
Design, develop, and maintain web applications using Java and Angular frameworks
Develop scalable backend services using Spring Boot and integrate with the frontend
Collaborate with cross-functional teams for end-to-end delivery
Write clean, testable, and efficient code following best practices
Perform code reviews and contribute to team knowledge sharing
Troubleshoot and resolve production issues as needed
Use version control systems like Git for collaborative development
Experience working with Pentaho Kettle or similar ETL tools
Nice To Have
Exposure to basic DevOps and deployment practices
Familiarity with Agile/Scrum methodologies
Skills: Hibernate, Pentaho Kettle, SCSS, Oracle SQL, Angular, Git, CSS3, PL/SQL, Java 8+, Spring Boot, Spring MVC, Spring Web Services, Spring Data, JasperReports, Angular Material, React 16+, HTML5, Bootstrap 4, SQL, Angular 8+, design patterns, Linux scripting, Java, TypeScript, full-stack development, microservices, JavaScript
Posted 4 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
You Lead the Way. We've Got Your Back.
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we're supporting our customers' financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what's possible, and we're proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture.
We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work, and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.
How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities.
Develop software in our technology stack, which is constantly evolving but currently includes big data, Spark, Python, Scala, GCP, and the Adobe suite (e.g., Customer Journey Analytics).
Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
Create technical solution designs to meet business requirements.
Define best practices to be followed by the team.
Take your place as a core member of an Agile team driving the latest development practices.
Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods.
Suggest and recommend solution architecture to resolve business problems.
Perform peer code review and participate in technical discussions with the team on the best solutions possible.
As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.
Minimum Qualifications:
BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
5+ years of hands-on software development experience with Big Data & Analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow.
Working knowledge of the Adobe suite, including Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using big data and GCP technologies.
Certification on a cloud platform (GCP Professional Data Engineer) is a plus.
Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
Strong object-oriented programming skills and design patterns.
Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
Good knowledge of and experience with configuration management tools like GitHub.
Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
Looks proactively beyond the obvious for continuous improvement opportunities.
Communicates effectively with product and cross-functional teams.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
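For illustration, a minimal Airflow DAG sketch of the pipeline design and orchestration work named in the qualifications; the task bodies and schedule are illustrative:

```python
# Two-step daily pipeline: extract, then load, with an explicit dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull daily events from the source system")

def load():
    print("load transformed events into the warehouse (e.g., BigQuery)")

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # extract must finish before load starts
```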
Posted 4 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Responsibilities:
1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing.
2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (i.e., Retrieval-Augmented Generation (RAG)).
3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS).
4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows.
5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points.
6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure-as-code (e.g., Terraform, Pulumi).
7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities.
8. Focus on creating high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets.
9. Stay updated on the latest advancements in LLMs, indexing techniques, and cloud technologies to integrate them into cutting-edge applications.
10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure.
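For illustration, a minimal sketch of the vector-indexing and retrieval core of a RAG system using FAISS; the embedding dimensionality and random vectors are stand-ins for real document and query embeddings:

```python
# Build an exact nearest-neighbour index over document embeddings and
# retrieve the top-k matches for a query embedding.
import numpy as np
import faiss

dim = 64  # hypothetical embedding dimensionality
doc_vecs = np.random.rand(1000, dim).astype("float32")  # stand-in document embeddings
query_vec = np.random.rand(1, dim).astype("float32")    # stand-in query embedding

index = faiss.IndexFlatL2(dim)  # exact L2 index; swap for IVF/HNSW at scale
index.add(doc_vecs)             # ingest all document vectors

distances, ids = index.search(query_vec, 5)  # top-5 nearest documents
print("retrieved doc ids:", ids[0])
```

In a full RAG pipeline, the retrieved documents would then be passed as context to an LLM.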
Posted 4 days ago
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle,WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris,France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore,Karnataka
Accenture in India
4290 Jobs | Dublin 2