5.0 years
4 - 6 Lacs
Bengaluru
On-site
Degree or postgraduate qualification in Computer Science or a related field (or equivalent industry experience), with a background in Mathematics and Statistics
Minimum 5+ years of development and design experience as a Data Engineer
Experience with Big Data platforms and distributed computing (e.g. Hadoop, MapReduce, Spark, HBase, Hive)
Experience in data pipeline software engineering and Python best practices (linting, unit tests, integration tests, git flow/pull request process, object-oriented development, data validation, algorithms and data structures, technical troubleshooting and debugging, bash scripting)
Experience in data quality assessment (profiling, anomaly detection) and data documentation (schemas, dictionaries)
Experience in data architecture, data warehousing and modelling techniques (relational, ETL, OLTP), with the ability to weigh performance alternatives
Used SQL, PL/SQL or T-SQL with RDBMSs in production environments; NoSQL databases nice to have
Linux OS configuration and use, including shell scripting
Well versed in Agile, DevOps and CI/CD principles (GitHub, Jenkins etc.), and actively involved in solving and troubleshooting issues in a distributed services ecosystem
Ensure quality of technical and application architecture and design of systems across the organization
Effectively research and benchmark technology against other best-in-class technologies
Banking, financial services and fintech experience in an enterprise environment preferred
Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience
Self-motivated self-starter, able to own and drive things without supervision and to work collaboratively with teams across the organization
Excellent soft and interpersonal skills to interact with the team and present ideas; a good listener who speaks clearly in front of the team, stakeholders and management; carries a positive attitude towards work, establishes effective team relations and builds a climate of trust; enthusiastic and passionate, creating a motivating environment for the team

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
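The Python best practices this posting lists (unit tests, data validation) can be illustrated with a minimal pytest sketch; the validate_transactions helper, its column names, and its rules are hypothetical, not part of any Virtusa codebase:

```python
import pandas as pd
import pytest


def validate_transactions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that violate basic data-quality rules and fail fast on schema drift."""
    required = {"txn_id", "amount", "txn_date"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    # Keep only positive amounts and parseable dates
    df = df[df["amount"] > 0].copy()
    df["txn_date"] = pd.to_datetime(df["txn_date"], errors="coerce")
    return df.dropna(subset=["txn_date"])


def test_validate_transactions_filters_bad_rows():
    raw = pd.DataFrame({
        "txn_id": [1, 2, 3],
        "amount": [100.0, -5.0, 20.0],
        "txn_date": ["2024-01-01", "2024-01-02", "not-a-date"],
    })
    clean = validate_transactions(raw)
    assert list(clean["txn_id"]) == [1]  # negative amount and unparseable date are dropped


def test_validate_transactions_rejects_schema_drift():
    with pytest.raises(ValueError):
        validate_transactions(pd.DataFrame({"amount": [1.0]}))
```

Run with pytest; tests like these catch schema drift and bad rows before a pipeline ships them downstream.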
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are
Wayfair is moving the world so that anyone can live in a home they love – a journey enabled by more than 3,000 Wayfair engineers and a data-centric culture. Wayfair’s Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI and are leveraging state-of-the-art Machine Learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for an experienced Senior Machine Learning Scientist to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair’s advertising platform. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your ML expertise to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes.

What You’ll Do
Design and build intelligent budget, tROAS, and SKU recommendations, and simulation-driven decisioning that extends beyond the current advertising platform capabilities.
Lead the next phase of GenAI-powered creative optimization and automation to drive significant incremental ad revenue and improve supplier outcomes.
Raise technical standards across the team by promoting best practices in ML system design and development.
Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance.
Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows.
Research and apply best practices in advertising science, GenAI applications in creative personalization, and auction modeling. Keep Wayfair at the forefront of innovation in supplier marketing optimization.
Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning.

We Are a Match Because You Have:
Bachelor's or Master’s degree in Computer Science, Mathematics, Statistics, or a related field.
9+ years of experience in building large-scale machine learning algorithms.
4+ years of experience working in an architect or technical leadership position.
Strong theoretical understanding of statistical models such as regression and clustering, and of ML algorithms such as decision trees, neural networks, transformers and NLP techniques.
Proficiency in programming languages such as Python and relevant ML libraries (e.g., TensorFlow, PyTorch) to develop production-grade products.
Strategic thinker with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organization.
Demonstrated success influencing senior-level stakeholders on strategic direction based on recommendations backed by in-depth analysis; excellent written and verbal communication.
Ability to partner cross-functionally to own and shape technical roadmaps.
Intellectual curiosity and a desire to always be learning!

Nice to Have
Experience with GCP, Airflow, and containerization (Docker).
Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc.
Familiarity with Generative AI and agentic workflows.
Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning.

About Wayfair Inc.
Wayfair is one of the world’s largest online destinations for the home. Through our commitment to industry-leading technology and creative problem-solving, we are confident that Wayfair will be home to the most rewarding work of your career. If you’re looking for rapid growth, constant learning, and dynamic challenges, then you’ll find that amazing career opportunities are knocking. No matter who you are, Wayfair is a place you can call home. We’re a community of innovators, risk-takers, and trailblazers who celebrate our differences, and know that our unique perspectives make us stronger, smarter, and well-positioned for success. We value and rely on the collective voices of our employees, customers, community, and suppliers to help guide us as we build a better Wayfair – and world – for all. Every voice, every perspective matters. That’s why we’re proud to be an equal opportunity employer. We do not discriminate on the basis of race, color, ethnicity, ancestry, religion, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, genetic information, or any other legally protected characteristic.

We are interested in retaining your data for a period of 12 months to consider you for suitable positions within Wayfair. Your personal data is processed in accordance with our Candidate Privacy Notice (which can be found here: https://www.wayfair.com/careers/privacy). If you have any questions regarding our processing of your personal data, please contact us at dataprotectionofficer@wayfair.com. If you would rather not have us retain your data, please contact us anytime at dataprotectionofficer@wayfair.com.
Posted 1 week ago
5.0 - 8.0 years
15 - 25 Lacs
Kolkata, Chennai, Bengaluru
Hybrid
Global Gen AI Developer

Enabling a software-defined, electrified future. Visteon is a technology company that develops and builds innovative digital cockpit and electrification products at the leading edge of the mobility revolution. Founded in 2000, Visteon brings decades of automotive intelligence combined with Silicon Valley speed to apply global insights that help transform the software-defined vehicle of the future for many of the world's largest OEMs. The company employs 10,000 employees in 18 countries around the globe.

Mission of the Role: Facilitate enterprise machine learning and artificial intelligence solutions using the latest technologies Visteon is adopting globally.

Key Objectives of this Role: The primary goal of the Global ML/AI Developer is to leverage advanced machine learning and artificial intelligence techniques to develop innovative solutions that drive Visteon's strategic initiatives. By collaborating with cross-functional teams and stakeholders, this role identifies opportunities for AI-driven improvements, designs and implements scalable ML models, and integrates these models into existing systems to enhance operational efficiency. By following development best practices, fostering a culture of continuous learning, and staying abreast of AI advancements, the Global ML/AI Developer ensures that all AI solutions align with organizational goals, support data-driven decision-making, and continuously improve Visteon's technological capabilities.

Qualification, Experience and Skills:
6-8 Yrs
Technical Skills: Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch), programming languages (e.g., Python, R, SQL), and data processing tools (e.g., Apache Spark, Hadoop). Proficiency in developing, training, and deploying ML models, including supervised and unsupervised learning, deep learning, and reinforcement learning. Strong understanding of data engineering concepts, including data preprocessing, feature engineering, and data pipeline development. Experience with cloud platforms (preferably Microsoft Azure) for deploying and scaling ML solutions.
Business Acumen: Strong business analysis skills and the ability to translate complex technical concepts into actionable business insights and recommendations.

Key Behaviors:
Innovation: Continuously seeks out new ideas, technologies, and methodologies to improve AI/ML solutions and drive the organization forward.
Attention to Detail: Pays close attention to all aspects of the work, ensuring accuracy and thoroughness in data analysis, model development, and documentation.
Effective Communication: Clearly and effectively communicates complex technical concepts to non-technical stakeholders, ensuring understanding and alignment across the organization.
Posted 1 week ago
3.0 - 4.0 years
0 Lacs
India
On-site
Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com. We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025!

About Team
GroundTruth seeks a Data Engineering Software Engineer to join our Attribution team. The Attribution Team specialises in designing and managing data pipelines that capture and connect user engagement data to optimise ad performance. We engineer scalable solutions for accurate and real-world attribution across the GroundTruth ecosystem. We engineer seamless data flows that fuel reliable analytics and decision-making using big data technologies, such as MapReduce, Spark, and Glue. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As a Software Engineer (SE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will
Create and maintain various ingestion pipelines for the GroundTruth platform.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Work with stakeholders, including the Product, Analytics and Client Services teams, to assist with data-related technical issues and support their data infrastructure needs.
Prepare detailed specifications and low-level design.
Participate in code reviews.
Test the product in controlled, real situations before going live.
Maintain the application once it is live.
Contribute ideas to improve the data platform.

You Have
B.Tech./B.E./M.Tech./MCA or equivalent in computer science
3-4 years of experience in Software Engineering
Experience with data ingestion pipelines
Experience with the AWS stack used for data engineering: EC2, S3, EMR, ECS, Lambda, and Step Functions
Hands-on experience with Python/Java for the orchestration of data pipelines
Experience in writing analytical queries using SQL
Experience in Airflow (a DAG sketch follows this listing)
Experience in Docker
Proficient in Git

How can you impress us?
Knowledge of REST APIs
Any experience with big data technologies like Hadoop, MapReduce, and Pig is a plus
Knowledge of shell scripting
Experience with BI tools like Looker
Experience with DB maintenance
Experience with Amazon Web Services and Docker
Configuration management and QA practices

Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
Parental leave – maternity and paternity
Flexible time off (earned leaves, sick leaves, birthday leave, bereavement leave & company holidays)
In-office daily catered breakfast, lunch, snacks and beverages
Health cover for any hospitalization; covers both nuclear family and parents
Tele-med for free doctor consultation, discounts on health checkups and medicines
Wellness/gym reimbursement
Pet expense reimbursement
Childcare expenses and reimbursements
Employee referral program
Education reimbursement program
Skill development program
Cell phone reimbursement (mobile subsidy program)
Internet reimbursement/postpaid cell phone bill/or both
Birthday treat reimbursement
Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
Creche reimbursement
Co-working space reimbursement
National Pension System employer match
Meal card for tax benefit
Special benefits on salary account
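As a hedged illustration of the ingestion-pipeline work this listing describes, here is a minimal Airflow 2.x DAG sketch; the DAG id, task names, and callables are hypothetical stand-ins, not GroundTruth code:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_visits(ds, **_):
    # Hypothetical: pull one day of visit events from a raw bucket into staging
    print(f"extracting visit events for {ds}")


def load_attribution(ds, **_):
    # Hypothetical: join visits to campaign data and write attribution rows
    print(f"loading attribution output for {ds}")


with DAG(
    dag_id="attribution_daily_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # 'schedule' assumes Airflow 2.4+
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_visits", python_callable=extract_visits)
    load = PythonOperator(task_id="load_attribution", python_callable=load_attribution)
    extract >> load  # load runs only after extraction succeeds
```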
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
India
Remote
JD: AWS Data Engineer
Exp Range: 7 to 11 Years
Location: Remote
Shift Timings: 12 PM to 9 PM
Primary Skills: Python, PySpark, SQL, AWS

Responsibilities
Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge scheduler etc.
Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.
Team Leadership: Mentor and guide data engineers, ensuring they adhere to best practices and meet project deadlines.

Qualifications
Bachelor’s degree in computer science, Engineering, or a related field.
6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
Strong understanding of data warehousing and data lake concepts.
Proficiency in SQL and at least one programming language (Python/PySpark).
Good to have: experience with big data technologies like Hadoop, Spark, and Kafka.
Knowledge of data modeling and data quality best practices.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a team.

Preferred Qualifications
Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect.

If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832. Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
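A minimal sketch of the kind of AWS Glue job this role describes, runnable only inside the Glue job environment where the awsglue libraries exist; the job arguments, S3 paths, and column names are hypothetical assumptions, and a real job would add validation and error handling:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Job parameters are supplied as --source_path / --target_path when the job starts
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV from S3, cast and de-duplicate, then write partitioned Parquet
orders = (
    spark.read.option("header", "true").csv(args["source_path"])
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
)
orders.write.mode("overwrite").partitionBy("order_date").parquet(args["target_path"])

job.commit()
```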
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview:
We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities:
Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
Utilise Apache Spark for data processing and analysis.
Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching (see the sketch after this listing).
Write efficient and maintainable Python code.
Implement and manage distributed systems for data processing.
Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
Ensure data quality and integrity throughout the data lifecycle.

Qualifications:
Proven experience with Apache Spark and Python/Java.
Strong knowledge of distributed systems.
Proficiency in creating data pipelines with AWS.
Excellent problem-solving and analytical skills.
Ability to work independently and as part of a team.
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience in designing and developing data pipelines using Apache Spark and Python.
Experience with distributed systems concepts (Hadoop, YARN) is a plus.
In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
Strong communication and collaboration skills.

Preferred Qualifications:
Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
Familiarity with data warehousing and ETL processes.
Knowledge of data governance and best practices.
A good understanding of OOP (object-oriented programming) concepts.
Hands-on experience with SQL database design.
Experience with Python, SQL, and data visualization/exploration tools.
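To ground the partitioning and caching points above, here is a hedged PySpark sketch; the bucket paths, table layouts, and column names are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-tuning-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")         # large fact data
products = spark.read.parquet("s3://example-bucket/dim_product/")  # small dimension

# Broadcast the small dimension so the join avoids shuffling the large side
enriched = events.join(F.broadcast(products), "product_id")

# Cache a result that several downstream aggregations will reuse
enriched.cache()
daily_revenue = enriched.groupBy("event_date").agg(F.sum("amount").alias("revenue"))
product_counts = enriched.groupBy("product_id").count()

# Partition output by date so later reads can prune files they do not need
daily_revenue.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```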
Posted 1 week ago
1.0 years
0 Lacs
Chandigarh, Chandigarh
On-site
Role: Big Data Engineer (Fresher)
Experience: 0–1 Years
Location: Chandigarh

Responsibilities
As an entry-level Big Data Engineer, you will work closely with experienced team members to help design, build, and maintain high-performance data solutions. You will assist in developing scalable pipelines and Spark-based processing jobs, and contribute to RESTful services that support data-driven products. This is a hands-on learning opportunity where you will be mentored and exposed to real-world Big Data technologies, DevOps practices, and collaborative agile teams.

Your key responsibilities will include:
Assisting in the design and development of data pipelines and streaming applications.
Learning to work with distributed systems and Big Data frameworks.
Supporting senior engineers in writing and testing code for data processing.
Participating in code reviews, team discussions, and product planning sessions.
Collaborating with cross-functional teams including product managers and QA.

Qualifications and Skills
Bachelor's degree in Computer Science, Engineering, or related field.
Good understanding of core programming concepts (Java, Python, or Scala preferred).
Familiarity with SQL and NoSQL databases.
Basic knowledge of Big Data tools such as Spark, Hadoop, Kafka (academic/project exposure acceptable).
Exposure to Linux/Unix environments.
Awareness of Agile methodologies (Scrum, Kanban) and DevOps tools like Git.
Curiosity to learn cloud platforms like AWS or GCP (certifications a plus).
Willingness to learn about system security (Kerberos, TLS, etc.).

Nice to Have (Not Mandatory):
Internships, academic projects, or certifications related to Big Data.
Contributions to open-source or personal GitHub projects.
Familiarity with containerization (Docker, Kubernetes) or CI/CD tools.

Job Types: Full-time, Permanent
Pay: Up to ₹331,000.00 per year
Benefits: Health insurance; Provident Fund
Schedule: Day shift; Rotational shift
Supplemental Pay: Performance bonus
Work Location: In person
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
6-8 years of good hands-on exposure with Big Data technologies – PySpark (DataFrame and SparkSQL), Hadoop, and Hive
Good hands-on experience with Python and Bash scripts
Good understanding of SQL and data warehouse concepts
Strong analytical, problem-solving, data analysis and research skills
Demonstrable ability to think outside the box and not be dependent on readily available tools
Excellent communication, presentation and interpersonal skills are a must
Hands-on experience with cloud-platform-provided Big Data technologies (e.g. IAM, Glue, EMR, Redshift, S3, Kinesis)
Orchestration with Airflow; any job scheduler experience
Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations
Good to have:

Role
Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
Perform integration testing of the created pipelines in the AWS environment.
Provide estimates for development, testing and deployments across different environments.
Participate in peer code reviews to ensure our applications comply with best practices.
Create cost-effective AWS pipelines with the required AWS services, i.e. S3, IAM, Glue, EMR, Redshift etc.

Experience: 6 to 8 years
Job Reference Number: 13024
Posted 1 week ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience
7+ years design & implementation experience with large-scale data-centric distributed applications
Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, databases etc.
Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc.
Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc.
Good understanding of Data Warehousing concepts; hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc.
Experience migrating or transforming legacy customer solutions to the cloud.
Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc.
Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other related tools and technologies
Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with SageMaker is good to have.
Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more.
Experience with a programming or scripting language – Python/Java/Scala
AWS Professional/Specialty certification or relevant cloud expertise

Role
Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries.
Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries.
Able to adapt to new technologies, learn quickly, and manage high ambiguity.
Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders.
Exhibit good presentation skills with a high degree of comfort speaking with executives, IT management, and developers.
Drive technology/software sales or pre-sales consulting discussions.
Ensure end-to-end ownership of all tasks being aligned.
Ensure high-quality software development with complete documentation and traceability.
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups).
Conduct technical trainings/sessions, write whitepapers/case studies/blogs etc.

Experience: 10 to 18 years
Job Reference Number: 12895
Posted 1 week ago
0 years
0 Lacs
Ballabgarh, Haryana, India
On-site
Intern – Business Analyst Data (LOB25-STA-06)
Nature: Data Business Analyst
Contract: 6-month internship
Experience: Less than 1 year
Location: Paris / Paris region

About the Assignment
The internship is part of the build-out of a large-scale information system for collecting and using Déclaration Sociale Nominative (DSN) data for a public-sector organization. Born of a political decision to simplify relations between companies and social-security bodies, the DSN is now widespread, used by the majority of companies, and replaces most periodic or event-driven French social declarations. DSN data carries significant business richness as well as a very large volume, with many uses: querying data in real time for actions such as company audits, computing figures such as headcount and payroll, or statistical analysis. Given the richness of this data, the organization has launched a major project to overhaul the collection and usage component of its information system within a BIG DATA architecture. Under the responsibility of a Product Owner, you will join a team of 7 Business Analysts and work on the definition and validation of sprints and of the Data Engineers' deliveries. In this context, you will be trained and supervised in methodologies for implementing DATA solutions.

Position Description – Planned Work
Functional upskilling on DSN data in order to grasp the project's stakes, the data scope and the related use cases
Learning the agile methodology (Scrum)
Participation in sprint specification and validation work, with a strong focus on test automation and non-regression testing. To that end, the intern will set up automation programs that will require some development; the internship is therefore aimed at a profile keen to work in a techno-functional setting.
Participation in agile ceremonies and project steering work
You will benefit from all of LOBELLIA Conseil's expertise on the business side and on running agile projects.

This Internship Will Give You
The architectural vision of a large-scale BIG DATA system
A practical case of understanding and using large-scale data
A view of how a multi-team DATA project is run in agile mode

The technologies used across the various topics are:
Hadoop suite (HDFS, Oozie, YARN, Spark, Hive)
Data access: MobaXterm, Zeppelin, MIT Kerberos, DBeaver
Programming languages: HQL (SQL-like) + Python
Working tools: SharePoint, Redmine, Git, Visual Studio Code, Excel

Profile Sought
Final-year engineering school or scientific Master 2 student.
Required qualities:
Techno-functional affinity
Writing skills
Analytical mindset
Rigor
Service orientation
Interpersonal ease
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Qualification
OLAP, data engineering, data warehousing, ETL
Hadoop ecosystem or AWS, Azure or GCP cluster and processing
Experience working on Hive or Spark SQL or Redshift or Snowflake
Experience in writing and troubleshooting SQL programming or MDX queries
Experience working on Linux
Experience in Microsoft Analysis Services (SSAS) or OLAP tools
Tableau or MicroStrategy or any BI tools
Expertise in programming in Python, Java or shell script would be a plus

Role
Be the front-facing person of the world’s most scalable OLAP product company – Kyvos Insights.
Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
Be the go-to person for prospects regarding technical issues during the POV stage.
Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
Lead a few small but highly efficient teams of big data engineers.
Efficient task status reporting to stakeholders and customers.
Good verbal & written communication skills.
Be willing to work off hours to meet timelines.
Be willing to travel or relocate as per project requirements.

Experience: 3 to 6 years
Job Reference Number: 10350
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification
Required
Proven hands-on experience designing, developing and supporting database projects for analysis in a demanding environment.
Proficient in database design techniques – relational and dimensional designs
Experience and a strong understanding of the business analysis techniques used.
High proficiency in the use of SQL or MDX queries.
Ability to manage multiple maintenance, enhancement and project-related tasks.
Ability to work independently on multiple assignments and to work collaboratively within a team is required.
Strong communication skills with both internal team members and external business stakeholders

Added Advantage
Hadoop ecosystem or AWS, Azure or GCP cluster and processing
Experience working on Hive or Spark SQL or Redshift or Snowflake will be an added advantage.
Experience working on Linux systems
Experience with Tableau or MicroStrategy or Power BI or any BI tools will be an added advantage.
Expertise in programming in Python, Java or shell script would be a plus

Role
Be the front-facing person of the world’s most scalable OLAP product company – Kyvos Insights.
Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
Be the go-to person for customers regarding technical issues during the project.
Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
Lead a few small but highly efficient teams of big data engineers.
Efficient task status reporting to stakeholders and customers.
Good verbal & written communication skills.
Be willing to work off hours to meet timelines.
Be willing to travel or relocate as per project requirements.

Experience: 5 to 10 years
Job Reference Number: 11078
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification
Pre-Sales Solution Engineer – India

Experience Areas Or Skills
Pre-sales experience with software or analytics products
Excellent verbal & written communication skills
OLAP tools or Microsoft Analysis Services (MSAS)
Data engineering or data warehouse or ETL
Hadoop ecosystem or AWS, Azure or GCP cluster and processing
Tableau or MicroStrategy or any BI tool
HiveQL or Spark SQL or PL/SQL or T-SQL
Writing and troubleshooting SQL programming or MDX queries
Working on Linux; programming in Python, Java or JavaScript would be a plus
Filling in RFPs or questionnaires from customers
NDA, success criteria, project closure and other documentation
Be willing to travel or relocate as per requirements

Role
Acts as the main point of contact for customer contacts involved in the evaluation process
Product demonstrations to qualified leads
Product demonstrations in support of marketing activity such as events or webinars
Own RFP, NDA, PoC success criteria, PoC closure and other documents
Secures alignment on process and documents with the customer/prospect
Owns the technical win phases of all active opportunities
Understand the customer domain and database schema
Provide OLAP and reporting solutions
Work closely with customers to understand and resolve environment, OLAP cube or reporting related issues
Coordinate with the solutioning team for execution of PoCs as per the success plan
Creates enhancement requests or identifies requests for new features on behalf of customers or hot prospects

Experience: 3 to 6 years
Job Reference Number: 10771
Posted 1 week ago
8.0 - 10.0 years
30 - 32 Lacs
Hyderabad
Work from Office
Candidate Specifications:
Candidates should have 9+ years of experience, including strong experience in Python and PySpark.
Candidates should have strong experience in AWS and PL/SQL.
Candidates should be strong in data management, with data governance and data streaming, along with data lakes and data warehouses.
Candidates should also have exposure to team handling and stakeholder management.
Candidates should have excellent written and verbal communication skills.
Contact Person: Sheena Rakesh
Posted 1 week ago
15.0 years
0 Lacs
Greater Lucknow Area
On-site
Qualification
15+ years of experience in managing and implementing high-end software products.
Expertise in Java/J2EE or EDW/SQL or Hadoop/Hive/Spark, preferably hands-on.
Good knowledge of any of the clouds (AWS/Azure/GCP) – must have
Managed, delivered and implemented complex projects dealing with considerable data sizes (TB/PB) and with high complexity
Experience in handling migration projects

Good To Have
Data ingestion, processing and orchestration knowledge

Role
Senior Technical Project Managers (STPMs) are in charge of handling all aspects of technical projects. This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. You should collaborate with, and leverage, colleagues in business development, product management, analytics, marketing, engineering, and partner organizations. You will manage multiple projects and ensure all releases are on time. You are responsible for managing and delivering the technical solution that supports an organization’s vision and strategic direction. You should be capable of working with different types of customers and should possess good customer-handling skills. Experience working in an ODC model and capable of presenting the technical design and architecture to senior technical stakeholders. Should have experience in defining the project and delivery plan for each assignment. Capable of doing resource allocation as per the requirements of each assignment. Should have experience of driving RFPs. Should have experience of account management – revenue forecasting, invoicing, SOW creation etc.

Experience: 15 to 20 years
Job Reference Number: 13010
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
ZiffyHealth is a BHARAT-focused IoT-enabled AI-driven health-tech platform, striving to bridge the disparity in healthcare professional availability between urban and rural India. Based on a Big-data Hadoop ecosystem, ZiffyHealth provides a 360° integrated healthcare platform supported by the Atal Innovation Mission, NITI Aayog, Government of India. Our mission is to make healthcare more accessible and affordable using cutting-edge technology and AI-powered processes. We aim to create a world where everyone can lead a healthy and productive life.

Role Description
This is a full-time, on-site role for a Telesales Representative located in Pune. The Telesales Representative will be responsible for making outbound sales calls, providing customer support, managing customer inquiries, and delivering excellent customer service. The representative will also assist in training new team members and contribute to achieving sales targets.

Qualifications
Strong communication skills
Customer service and customer support experience
Sales skills
Experience in training team members
Excellent interpersonal and problem-solving abilities
Ability to work in a fast-paced environment
Experience in the healthcare industry is a plus
Bachelor's degree in Business Administration, Marketing, or a related field
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backend & MLOps Engineer – Integration, API, and Infrastructure Expert 1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel in deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure. 2. Key Responsibilities: 2.1. Create RESTful and/or gRPC APIs for model services. 2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images. 2.3. Develop CI/CD pipelines for model training and deployment. 2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI. 2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines. 2.6. Build secured data ingestion and processing workflows (ETL/ELT). 2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/ M.Tech in Computer Science, Information Technology, or Software Engineering. 3.2. Strong foundation in distributed systems, databases, and cloud computing. 3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines. Professional Certifications: 3.4. AWS Solutions Architect/DevOps Engineer Professional 3.5. Google Cloud Professional ML Engineer or DevOps Engineer 3.6. Azure AI Engineer or DevOps Engineer Expert. 3.7. Kubernetes Administrator (CKA) or Developer (CKAD). 3.8. Docker Certified Associate Core Skills & Tools 4. Backend Development: 4.1. Languages: Python, FastAPI, Flask, Go, Java, Node.js, Rust (for performance-critical components) 4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js. 4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections. 4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols. 5. MLOps & Model Management: 5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect 5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML 5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML 5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store 5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions 6. Infrastructure & DevOps: 6.1. Containerization: Docker, Podman, container optimization. 6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift. 6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred). 6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible. 6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD. 6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins. 7. Database & Storage: 7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications) 7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch 7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus 7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg 7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO 7.6. Backend: Python (FastAPI, Flask), Node.js (optional) 7.7. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins 8. Secure Deployment: 8.1. Military-grade security protocols and compliance 8.2. Air-gapped deployment capabilities 8.3. Encrypted data transmission and storage 8.4. 
Role-based access control (RBAC) & IDAM integration 8.5. Audit logging and compliance reporting 9. Edge Computing: 9.1. Deployment on naval vessels with air gapped connectivity. 9.2. Optimization of applications for resource-constrained environment. 10. High Availability Systems: 10.1. Mission-critical system design with 99.9% uptime. 10.2. Disaster recovery and backup strategies. 10.3. Load balancing and auto-scaling. 10.4. Failover mechanisms for critical operations. 11. Cross-Compatibility Requirements: 11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI). 11.2. Develop model loaders for AI Engineer's ONNX/ serialized models. 11.3. Provide UI developers with test environments, mock data, and endpoints. 11.4. Support frontend debugging, edge deployment bundling, and user role enforcement. 12. Experience Requirements 12.1. Production experience with cloud platforms and containerization. 12.2. Experience building and maintaining APIs serving millions of requests. 12.3. Knowledge of database optimization and performance tuning. 12.4. Experience with monitoring and alerting systems. 12.5. Architected and deployed large-scale distributed systems. 12.6. Led infrastructure migration or modernization projects. 12.7. Experience with multi-region deployments and disaster recovery. 12.8. Track record of optimizing system performance and cost
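As a hedged sketch of responsibility 2.4 (serving a model behind FastAPI), the snippet below loads a hypothetical ONNX artifact and exposes /predict and /healthz endpoints; the model path, input shape, and single-output assumption are illustrative only:

```python
from contextlib import asynccontextmanager

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

MODEL_PATH = "model.onnx"  # hypothetical artifact handed over by the AI team
session = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    global session
    session = ort.InferenceSession(MODEL_PATH)  # load once at startup
    yield


app = FastAPI(title="inference-service", lifespan=lifespan)


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest):
    x = np.asarray([req.features], dtype=np.float32)
    input_name = session.get_inputs()[0].name
    (scores,) = session.run(None, {input_name: x})  # assumes a single model output
    return {"scores": scores.tolist()}


@app.get("/healthz")
def healthz():
    return {"status": "ok"}
```

Packaged in a Docker image, a service like this is what the posting's Kubernetes and CI/CD requirements would deploy and monitor.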
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

Team Overview:
Global Credit & Model Risk Oversight, Transaction Monitoring & GRC Capabilities (CMRC) provides independent challenge and ensures that significant credit and model risks are properly evaluated and monitored, and Anti-Money Laundering (AML) risks are mitigated through the transaction monitoring program. In addition, CMRC hosts the central product organization responsible for the ongoing maintenance and modernization of GRC platforms and capabilities.

How will you make an impact in this role?
The AML Data Capabilities team was established with a mission to own and govern data across products – raw data, derivations, and organized views that cater to analytics and production use cases – and to manage end-to-end data quality. This team comprises risk data experts with deep SME knowledge of risk data, systems and processes covering all aspects of the customer life cycle. Our mission is to build and support Anti-Money Laundering transaction monitoring data and rule needs in collaboration with Strategy and Technology partners, with focus on our core tenets of timeliness, quality and process efficiency.

Responsibilities include:
Develop and maintain organized data layers to cater for both production use cases and analytics for transaction monitoring of Anti-Money Laundering rules.
Manage end-to-end big data integration processes for building key variables from disparate source systems with 100% accuracy and 100% on-time delivery.
Partner closely with Strategy and Modeling teams in building incremental intelligence, with strong emphasis on maintaining globalization and standardization of attribute calculations across portfolios.
Partner with Tech teams in designing and building next-generation data quality controls.
Drive automation initiatives within existing processes and fully optimize delivery effort and processing time.
Effectively manage relationships with stakeholders across multiple geographies.
Contribute to evaluating and/or developing the right tools, common components, and capabilities.
Follow industry-best agile practices to deliver on key priorities.
Implementation of defined rules on the Lucy platform in order to identify AML alerts.
Ensure processes and actions are logged and support regulatory reporting, documenting the analysis and the rule build in the form of a qualitative document for relevant stakeholders.

Minimum Qualifications
Academic background: Bachelor's degree with up to 2 years of relevant work experience
Strong Hive and SQL skills, knowledge of big data and related technologies
Hands-on experience with Hadoop & shell scripting is a plus
Understanding of data architecture & data engineering concepts
Strong verbal and written communication skills, with the ability to cater to a versatile technical and non-technical audience
Willingness to collaborate with cross-functional teams to drive validation and project execution
Good-to-have skills: Python/PySpark
Excellent analytical & critical thinking with attention to detail
Excellent planning and organization skills, including the ability to manage inter-dependencies and execute under stringent deadlines
Exceptional drive and commitment; ability to work and thrive in a fast-changing, results-driven environment; and proven ability in handling competing priorities

Behavioral Skills/Capabilities:
Enterprise Leadership Behaviors
Set the Agenda:
Ability to apply thought leadership and come up with ideas
Take the complete perspective into the picture while designing solutions
Use market best practices to design solutions
Bring Others with You:
Collaborate with multiple stakeholders and other scrum teams to deliver on promise
Learn from peers and leaders
Coach and help peers
Do It the Right Way:
Communicate effectively
Be candid and clear in communications
Make decisions quickly & effectively
Live the company culture and values

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
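For illustration only, a PySpark/Spark SQL sketch of the kind of organized-layer transaction-monitoring query such a team might write; the database, schema, and thresholds are hypothetical and are not American Express's actual Lucy rules:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("aml-rule-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical organized layer: aml_org.txn(customer_id, txn_ts, amount, channel)
alerts = spark.sql("""
    SELECT customer_id,
           to_date(txn_ts)  AS txn_day,
           COUNT(*)         AS txn_count,
           SUM(amount)      AS total_amount
    FROM   aml_org.txn
    WHERE  channel = 'CASH'
      AND  amount BETWEEN 9000 AND 9999   -- just under a 10k reporting threshold
    GROUP  BY customer_id, to_date(txn_ts)
    HAVING COUNT(*) >= 3                  -- repeated sub-threshold deposits same day
""")

# Persist alerts back to the organized layer for downstream review workflows
alerts.write.mode("overwrite").saveAsTable("aml_org.structuring_alerts")
```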
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Engineer

Responsibilities:
- Design, build, and deploy advanced machine learning models and algorithms to solve complex problems in various domains, such as natural language processing, computer vision, and predictive analytics.
- Collaborate closely with cross-functional teams, including data scientists, engineers, and product managers, to identify opportunities for leveraging company data to improve business outcomes.
- Perform data mining and feature extraction using advanced statistical and machine learning techniques.
- Evaluate and validate the performance of machine learning models using appropriate metrics and techniques.
- Optimize and fine-tune machine learning models for optimal performance in production environments.
- Stay up-to-date with the latest advancements in machine learning and artificial intelligence research.
- Contribute to the development of best practices in machine learning and share knowledge with peers and junior team members.
- Provide technical leadership and mentoring to other team members as needed.

Requirements:
- 4+ years of experience in developing and deploying machine learning models in a professional setting.
- Strong understanding of various machine learning algorithms, such as linear regression, logistic regression, decision trees, SVM, neural networks, ensemble methods, and reinforcement learning.
- Demonstrated experience in working with large datasets and using big data technologies, such as Hadoop, Spark, and distributed computing.
- Proficient in programming languages, such as Python.
- Experience with machine learning frameworks and libraries, such as TensorFlow, Keras, PyTorch, or Scikit-learn.
- Familiarity with data visualization tools, such as Tableau, Matplotlib, or D3.js.

Preferred Qualifications:
- Experience in working with cloud platforms, such as AWS, Google Cloud, or Azure.
- Knowledge of natural language processing, computer vision, or deep learning techniques.
- Experience developing end-to-end machine learning pipelines, from data gathering and preprocessing to model deployment and monitoring.
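A minimal, hedged sketch of the train-evaluate-validate loop this role describes, using scikit-learn on synthetic data; the model choice and metric are illustrative assumptions, not a prescribed approach:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real project data
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Validate with a threshold-independent metric before promoting to production
probs = model.predict_proba(X_test)[:, 1]
print(f"holdout ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```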
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Cloud and AWS Expertise:
In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena.
Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
Expertise in ETL/ELT processes, data modeling, and data warehousing.
Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
Proficiency in handling structured and unstructured data.

Programming and Scripting:
Proficiency in Python, PySpark and SQL for data manipulation and pipeline development.
Expertise in working with data warehousing solutions like Redshift.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
Design and implement performance and operational enhancements for scalable data systems
Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
Partner with architecture teams to drive forward-thinking data platform solutions
Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
Mentor junior engineers and collaborate on solution design with team members and product owners
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Bachelor’s degree or equivalent experience
Hands-on experience with cloud data services (AWS, Azure, or GCP)
Experience building and maintaining ETL/ELT pipelines in enterprise environments
Experience integrating with RESTful APIs
Experience with Agile methodologies (Scrum, Kanban)
Knowledge of data governance, security, privacy, and vulnerability management
Understanding of authorization protocols (OAuth) and API integration
Solid proficiency in SQL, NoSQL, and data modeling
Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
Familiarity with big data technologies such as Spark, Hadoop, and Databricks
Ability to build modular, testable, and reusable data solutions
Solid grasp of data engineering concepts including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications
Experience with GitHub, Terraform, and GitHub Actions
Experience with real-time data streaming (Kafka, Kinesis)
Experience with feature engineering and machine learning pipelines (MLOps)
Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
Familiarity with AWS native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
5.5 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Company
KPMG in India is a leading professional services firm established in August 1993. The firm offers a wide range of services, including audit, tax, and advisory, to national and international clients across various sectors. KPMG operates from offices in 14 cities, including Mumbai, Bengaluru, Chennai, and Delhi. KPMG India is known for its rapid, performance-based, industry-focused, and technology-enabled services. The firm leverages its global network to provide informed and timely business advice, helping clients mitigate risks and seize opportunities. KPMG India is committed to quality and excellence, fostering a culture of growth, innovation, and collaboration.

About the Job: Spark/Scala Developer
Experience: 5.5 to 9 years
Location: Mumbai
We are seeking a skilled Spark/Scala Developer with 5.5 to 9 years of experience in Big Data engineering. The ideal candidate will have strong expertise in Scala programming, SQL, and data processing using Apache Spark within Hadoop ecosystems.

Key Responsibilities:
Design, develop, and implement data ingestion and processing solutions for batch and streaming workloads using Scala and Apache Spark
Optimize and debug Spark jobs for performance and reliability (see the tuning sketch after this posting)
Translate functional requirements and user stories into scalable technical solutions
Develop and troubleshoot complex SQL queries to extract business-critical insights

Required Skills:
2+ years of hands-on experience in Scala programming and SQL
Proven experience with Hadoop Data Lake and Big Data tools
Strong understanding of Spark job optimization and performance tuning
Ability to work collaboratively in an Agile environment

Equal Opportunity Statement
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.
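For context on the "optimize and debug Spark jobs" responsibility above, a hedged sketch of common tuning moves follows. It is shown in PySpark rather than Scala for consistency with the other examples in this document, and the paths and column names are hypothetical:

```python
# Hedged sketch of routine Spark tuning; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

orders = spark.read.parquet("/data/orders")        # hypothetical path
customers = spark.read.parquet("/data/customers")  # hypothetical path

# Broadcast the small side of the join to avoid a full shuffle.
joined = orders.join(F.broadcast(customers), "customer_id")

# Repartition by the aggregation key before a wide operation to balance
# tasks, then cache the result if it is reused downstream.
daily = (
    joined.repartition("order_date")
          .groupBy("order_date")
          .agg(F.sum("amount").alias("revenue"))
          .cache()
)

daily.explain()  # inspect the physical plan while debugging performance
```

The same DataFrame API (broadcast, repartition, cache, explain) exists in Scala, so the pattern carries over directly to the stack this role names.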
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring for one of the IT product-based companies.
Job Title: Senior Data Engineer
Experience: 5+ years
Location: Gurgaon/Pune
Work Mode: Hybrid
Skills: Azure and Databricks
Programming Languages: Python and PowerShell; .NET/Java are a plus

What you will do
Participate in the design and development of highly performant and scalable large-scale Data and Analytics products
Participate in requirements grooming, analysis and design discussions with fellow developers, architects and product analysts
Participate in product planning by providing estimates on user stories
Participate in the daily standup meeting and proactively provide status on tasks
Develop high-quality code according to business and technical requirements as defined in user stories
Write unit tests that will improve the quality of your code (see the test sketch after this posting)
Review code for defects and validate implementation details against user stories
Work with quality assurance analysts who build test cases that validate your work
Demo your solutions to product owners and other stakeholders
Work with other Data and Analytics development teams to maintain consistency across the products by following standards and best software development practices
Provide third-tier support for our product suite

What you will bring
3+ years of Data Engineering and Analytics experience
2+ years of working experience with Azure and Databricks (or Apache Spark, Hadoop and Hive)
Knowledge and application of the following technical skills: T-SQL/PL-SQL, PySpark, Azure Data Factory, Databricks (or Apache Spark, Hadoop and Hive), and Power BI or equivalent Business Intelligence tools
Understanding of dimensional modeling and Data Warehouse concepts
Programming skills such as Python and PowerShell; .NET/Java are a plus
Git repository experience and a thorough understanding of branching and merging strategies
2 years' experience developing in the Agile Software Development Life Cycle and Scrum methodology
Strong planning and time management skills
Advanced problem-solving skills and a data-driven mindset
Excellent written and oral communication skills
Team player who fosters an environment of shared success, is passionate about always learning and improving, self-motivated, open-minded, and creative

What we would like to see
Bachelor's degree in computer science or a related field
Healthcare knowledge is a plus
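As a minimal sketch of the "write unit tests" expectation above, here is a pytest test over a local Spark session; the mask_email transform is a hypothetical example, not part of the role:

```python
# Illustrative unit-test sketch; mask_email is a hypothetical transform.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


@pytest.fixture(scope="session")
def spark():
    # A small local session keeps the test suite fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def mask_email(df, col="email"):
    # Redact everything before the '@': a typical small, testable transform.
    return df.withColumn(col, F.regexp_replace(col, r"^[^@]+", "***"))


def test_mask_email_hides_local_part(spark):
    df = spark.createDataFrame([("alice@example.com",)], ["email"])
    assert mask_email(df).first()["email"] == "***@example.com"
```

Keeping transforms as small functions over DataFrames is what makes this kind of test cheap to write and run.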
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
The key focus for the senior data architect is to plan for key data solutions, build and participate in the architecture capability, perform data architecture and design, manage data architecture risk and compliance, provide design and build governance and support, and communicate and share knowledge about the architecture practices, guardrails, blueprints and standards related to data solution design.

Main Activities of the Job

Planning
Lead data solution requirements gathering and ensure alignment with business objectives and constraints
Define and refine data architecture runways for intentional architecture with the key stakeholders
Provide input into business cases and costing
Participate in and provide data architectural runway requirements into Programme Increment (PI) Planning

Architecture Capability
Develop and oversee data architecture views and ensure alignment with enterprise architecture
Maintain and oversee the data solution artifacts in the set enterprise repository and knowledge portals, aligned to the rest of the architecture
Manage the data architecture processes based on the requirements for each archetype
Manage the change impact of the data architecture with stakeholders
Develop and participate in the build of the data architecture practice with embedded architects and engineers, including the relevant methods, repository and tools
Manage the data architecture considering the business, application, information/data and technology viewpoints
Establish, enforce and implement data standards, guardrails, frameworks, and patterns

Solution Design
Lead and review logical and detailed data architecture
Evaluate and approve data solution options and technology selections
Select appropriate technology, tools and build for the solution
Oversee and maintain the data solution blueprints
Drive incremental modernisation initiatives in the delivery area

Risk, Governance and Compliance
Identify, assess and mitigate risks at a data solution architecture level
Ensure and enforce compliance with policies, standards, and regulations
Lead data architecture reviews and integrate with governance functions
Integrate with other governance and compliance functions to ensure continuity in managing the investment and risk for the organisation pertaining to the solution architectures
Establish and provide data standards, guidance, and tools to delivery teams

Implementation and Collaboration
Establish and provide data solution architectures and tools to the delivery and data engineering teams
Lead and facilitate collaboration with delivery teams to achieve architecture objectives
Manage and resolve deviations and ensure up-to-date data solution design documentation
Identify opportunities to optimise delivery of solutions
Oversee and conduct post-implementation reviews
Ensure the data architecture supports CI/CD pipelines to facilitate rapid and reliable deployment of data solutions
Implement automated testing frameworks for data solutions to ensure quality and reliability throughout the development lifecycle (see the guardrail sketch after this posting)
Establish performance monitoring and optimisation practices to ensure data solutions meet performance benchmarks and can scale as needed
Integrate robust data security measures, including encryption, access controls, and regular security audits, into the implementation process

Communication and Knowledge Sharing
Communicate and advocate up-to-date data solution architecture views
Communicate the relevant data standards, practices, guardrails and tools to stakeholders relevant to the solution design
Ensure IT teams are well-informed and trained in architecture requirements
Communicate and collaborate with stakeholders on relevant views of planning, technology assessments, risk, compliance, governance and implementation assessments
Foster collaboration between data architects, data engineers, and other IT teams through regular cross-functional meetings and agile ceremonies
Communicate and maintain up-to-date blueprint designs for key data solutions
Ensure effective participation in the agile ceremonies (PI planning, sprint planning, retrospectives, demos)
Implement regular feedback loops with stakeholders and end-users to continuously improve data solutions based on real-world usage and requirements
Create a culture of knowledge sharing by organising regular workshops, training sessions, and documentation updates to keep all team members informed about the latest data architecture practices and tools

Minimum Qualifications/Experience (Required for the Job)
Matric
Degree or diploma in Information Technology, Computer Science, Engineering, or a relevant diploma/degree
Experience: Requires a minimum of 5 years in a technical/solution design role and a minimum of 7 years of relevant IT experience
Data Experience: Requires a minimum of 7 years of related experience in data engineering, data modeling and design, and data management and governance
Data-Related Experience:
Big Data and Analytics (e.g., Hadoop, Spark)
Data Warehousing (e.g., Databricks, Snowflake, Redshift)
Master Data Management (MDM)
Data Lakes and Data Mesh
Metadata Management
ETL/ELT Processes
Data Privacy and Compliance
Cloud Data Services

Additional Qualifications/Experience (Preferred)
DAMA-DMBOK
TOGAF
ArchiMate
Cloud Certifications (AWS, Azure)
Financial Industry Experience

Competencies Required
Attributes and competencies related to architecture:
Critical thinking/problem solving
Teamwork/collaboration
Effective communication skills
Leadership skills
Knowledge and experience in architecture domains
Knowledge and experience in architecture methods, frameworks and tools
Solution design experience
Agile knowledge and experience
Cloud knowledge and experience
Data-related competencies:
Data modeling, database design and data governance best practices and implementation
Data architecture principles and methodologies
Data integration technologies and tools
Data management and governance
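To make the "automated testing frameworks for data solutions" activity above concrete, here is a hedged Python sketch of a simple data-quality guardrail; the thresholds, column names, and function are hypothetical, not a prescribed framework:

```python
# Hedged sketch of a data-quality guardrail; names and thresholds are
# hypothetical. Real implementations often use dedicated tools instead.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def enforce_guardrails(df: DataFrame, not_null: list[str], min_rows: int = 1) -> DataFrame:
    """Fail the pipeline early when basic quality checks are violated."""
    total = df.count()
    if total < min_rows:
        raise ValueError(f"Expected at least {min_rows} rows, got {total}")
    for col in not_null:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls:
            raise ValueError(f"Column {col!r} has {nulls} null values")
    return df  # safe to publish downstream
```

Wiring a check like this into a CI/CD pipeline is one way the architecture can enforce quality gates before data reaches consumers.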
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Are you a talented Data Scientist (including AI/ML Researcher, AI/ML Engineer, Data Engineer, ML Ops Engineer, QA Engineer with an AI/ML focus, NLP Engineer, or LLM Engineer) who is either:
Looking for your next big challenge working remotely, OR
Employed, but open to offers from elite US companies to work remotely?
Submit your resume to GlobalPros.ai, an exclusive community of the world's top pre-vetted developers, dedicated to precisely matching you with our US employers. GlobalPros.ai is followed internationally by over 13,000 employers, agencies and the world's top developers. We are currently searching for a full-time AI/ML developer (including AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with an AI/ML focus, NLP Engineer) to work remotely for our US employer clients.

What We Offer:
Competitive compensation. Compensation is negotiable and commensurate with your experience and expertise.
Pre-vetting, so you're 2x more likely to be hired. Recent studies by Indeed and LinkedIn show pre-vetted candidates like you are twice as likely to be hired.
Shortlist competitive advantage. Our machine learning technology matches you precisely to job requirements, and because you're pre-vetted, you're shortlisted ahead of other candidates.
Personalized career support. Free one-on-one career counseling and interview prep to help you succeed.
Anonymity. If you're employed but open to offers, your profile is anonymous and is not available on our website or otherwise online. When matched with our clients, your profile remains anonymous until you agree to be interviewed, so there's no risk in submitting your resume now.

We're Looking For:
Experience: at least 3 years of experience.
Role: AI/ML developer (including AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with an AI/ML focus, NLP Engineer).
Skills: TensorFlow, PyTorch, Scikit-learn, Python, Java, C++, R, AWS, Azure, GCP, SQL, NoSQL, Hadoop, Spark, Docker, Kubernetes, AWS Redshift, Google BigQuery.
Willing to work full-time (40 hours per week).
Available for an hour of assessment testing.

Being deeply vetted, with a data-enhanced resume, and matched precisely by our machine learning algorithms substantially increases the probability of being hired quickly and at higher compensation than unvetted candidates. It's your substantial competitive advantage in a crowded job market.
Posted 1 week ago