1.0 years
0 Lacs
Chandigarh, Chandigarh
On-site
Role: Big Data Engineer (Fresher) Experience: 0–1 Years Location: Chandigarh Responsibilities As an entry-level Big Data Engineer, you will work closely with experienced team members to help design, build, and maintain high-performance data solutions. You will assist in developing scalable pipelines, Spark-based processing jobs, and contribute to RESTful services that support data-driven products. This is a hands-on learning opportunity where you will be mentored and exposed to real-world Big Data technologies, DevOps practices, and collaborative agile teams. Your key responsibilities will include: Assisting in the design and development of data pipelines and streaming applications. Learning to work with distributed systems and Big Data frameworks. Supporting senior engineers in writing and testing code for data processing. Participating in code reviews, team discussions, and product planning sessions. Collaborating with cross-functional teams including product managers and QA. Qualifications and Skills Bachelor's degree in Computer Science, Engineering, or related field. Good understanding of core programming concepts (Java, Python, or Scala preferred). Familiarity with SQL and NoSQL databases. Basic knowledge of Big Data tools such as Spark, Hadoop, Kafka (academic/project exposure acceptable). Exposure to Linux/Unix environments. Awareness of Agile methodologies (Scrum, Kanban) and DevOps tools like Git. Curiosity to learn cloud platforms like AWS or GCP (certifications a plus). Willingness to learn about system security (Kerberos, TLS, etc.). Nice to Have (Not Mandatory): Internships, academic projects, or certifications related to Big Data. Contributions to open-source or personal GitHub projects. Familiarity with containerization (Docker, Kubernetes) or CI/CD tools. Job Types: Full-time, Permanent Pay: Up to ₹331,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Rotational shift Supplemental Pay: Performance bonus Work Location: In person
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification 6–8 years of solid hands-on exposure to Big Data technologies – pySpark (DataFrame and SparkSQL), Hadoop, and Hive Good hands-on experience with Python and Bash scripts Good understanding of SQL and data warehouse concepts Strong analytical, problem-solving, data analysis and research skills Demonstrable ability to think outside the box without depending on readily available tools Excellent communication, presentation and interpersonal skills are a must Hands-on experience with cloud-provided Big Data services (e.g. IAM, Glue, EMR, Redshift, S3, Kinesis) Orchestration experience with Airflow or any other job scheduler Experience migrating workloads from on-premise to cloud and between clouds Good to have: Role Develop efficient ETL pipelines per business requirements, following development standards and best practices. Perform integration testing of the pipelines created in the AWS environment. Provide estimates for development, testing and deployment across environments. Participate in peer code reviews to ensure our applications comply with best practices. Build cost-effective AWS pipelines using the required AWS services, e.g. S3, IAM, Glue, EMR, Redshift. Experience 6 to 8 years Job Reference Number 13024
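For context on the day-to-day work this role describes, below is a minimal PySpark batch ETL sketch combining the DataFrame API and SparkSQL. The S3 paths, column names, and business rules are hypothetical placeholders, not details taken from the posting.

```python
# Minimal sketch of a PySpark batch ETL job; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: raw JSON landed in S3 (hypothetical bucket/prefix)
orders = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: DataFrame API for cleansing ...
cleaned = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)

# ... and Spark SQL for the aggregation
cleaned.createOrReplaceTempView("orders_clean")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders_clean
    GROUP BY order_date
""")

# Load: partitioned Parquet for downstream consumption (e.g. Athena or Redshift Spectrum)
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_orders/"
)
```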
Posted 1 week ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding about various architecture patterns like data lake, data lake house, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, Data Zone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies Understanding in designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with Sagemaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Role Drive innovation within Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating innovative mindset and enable fast paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all tasks being aligned. Ensure high quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups) Conduct technical training(s)/session(s), write whitepapers/ case studies / blogs etc. Experience 10 to 18 years Job Reference Number 12895
Posted 1 week ago
0 years
0 Lacs
Ballabgarh, Haryana, India
On-site
Data Business Analyst Intern (LOB25-STA-06) Nature: Data Business Analyst Contract: 6-month internship Experience: Less than 1 year Work location: Paris / Paris region About the assignment: The internship is part of the build-out of a large-scale information system for collecting and using Nominative Social Declaration (DSN) data for a public-sector organization. Born of a political decision to simplify relations between companies and social-security bodies, the Déclaration Sociale Nominative is now widely adopted by the majority of companies and replaces most periodic or event-driven French social declarations. DSN data carries significant business richness and very large volumes, with many uses: real-time data queries for actions such as company audits, computation of figures such as headcount and payroll, and statistical analysis. Given the richness of this data, the organization has launched a major project to rebuild its DSN collection and usage component on a Big Data architecture. Reporting to a Product Owner, you will join a team of 7 Business Analysts and work on defining and validating the sprints and the Data Engineers' deliveries. In this context, you will be trained and mentored on methodologies for delivering data solutions. Position description – Work you will carry out: Building functional expertise on DSN data to understand the project's stakes, the data scope and the related use cases Learning the agile methodology (Scrum) Contributing to sprint specification and validation work, with a strong focus on test automation and non-regression testing; to that end, the intern will set up automation programs requiring some development, so the internship suits a profile keen to work in a techno-functional context Taking part in agile ceremonies and steering activities You will benefit from all of LOBELLIA Conseil's expertise on the business side and on running agile projects. This internship will give you: An architectural view of a large-scale Big Data system A practical case of understanding and using large-scale data Insight into how a multi-team data project is run in agile mode The technologies used across the various topics are: Hadoop suite (HDFS, Oozie, Yarn, Spark, Hive) Data access: MobaXterm, Zeppelin, MIT Kerberos, DBeaver Programming languages: HQL (SQL-like) + Python Working tools: SharePoint, Redmine, Git, Visual Studio Code, Excel Profile sought: Final-year engineering school or scientific Master's (M2) student. Required qualities: Techno-functional curiosity Writing skills Analytical mindset Rigour Sense of service Good interpersonal skills
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Qualification OLAP, data engineering, data warehousing, ETL Hadoop ecosystem or AWS, Azure or GCP cluster setup and processing Experience working on Hive, Spark SQL, Redshift or Snowflake Experience writing and troubleshooting SQL or MDX queries Experience working on Linux Experience with Microsoft Analysis Services (SSAS) or other OLAP tools Tableau, MicroStrategy or any BI tool Programming expertise in Python, Java or shell scripting would be a plus Role Be the front-facing person of the world's most scalable OLAP product company – Kyvos Insights. Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and the problem statements in that area. Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems. Be the go-to person for prospects on technical issues during the POV stage. Be instrumental in reading the pulse of the big data market and defining the product roadmap. Lead a few small but highly efficient teams of big data engineers. Report task status efficiently to stakeholders and customers. Good verbal and written communication skills. Be willing to work off hours to meet timelines. Be willing to travel or relocate as per project requirements. Experience 3 to 6 years Job Reference Number 10350
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification Required Proven hands-on experience designing, developing and supporting database projects for analytics in a demanding environment. Proficient in database design techniques – relational and dimensional designs. Experience with and a strong understanding of business analysis techniques. High proficiency in writing SQL or MDX queries. Ability to manage multiple maintenance, enhancement and project-related tasks. Ability to work independently on multiple assignments and to work collaboratively within a team is required. Strong communication skills with both internal team members and external business stakeholders. Added Advantage Hadoop ecosystem or AWS, Azure or GCP cluster setup and processing. Experience working on Hive, Spark SQL, Redshift or Snowflake will be an added advantage. Experience working on Linux systems. Experience with Tableau, MicroStrategy, Power BI or any BI tool will be an added advantage. Programming expertise in Python, Java or shell scripting would be a plus. Role Roles & Responsibilities Be the front-facing person of the world's most scalable OLAP product company – Kyvos Insights. Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and the problem statements in that area. Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems. Be the go-to person for customers on technical issues during the project. Be instrumental in reading the pulse of the big data market and defining the product roadmap. Lead a few small but highly efficient teams of big data engineers. Report task status efficiently to stakeholders and customers. Good verbal and written communication skills. Be willing to work off hours to meet timelines. Be willing to travel or relocate as per project requirements. Experience 5 to 10 years Job Reference Number 11078
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification Pre-Sales Solution Engineer - India Experience Areas Or Skills Pre-sales experience with software or analytics products Excellent verbal and written communication skills OLAP tools or Microsoft Analysis Services (MSAS) Data engineering, data warehousing or ETL Hadoop ecosystem or AWS, Azure or GCP cluster setup and processing Tableau, MicroStrategy or any BI tool HiveQL, Spark SQL, PL/SQL or T-SQL Writing and troubleshooting SQL or MDX queries Working on Linux; programming in Python, Java or JavaScript would be a plus Filling in RFPs or questionnaires from customers NDA, success criteria, project closure and other documentation Be willing to travel or relocate as per requirement Role Acts as the main point of contact for customer contacts involved in the evaluation process Delivers product demonstrations to qualified leads Delivers product demonstrations in support of marketing activities such as events or webinars Owns the RFP, NDA, PoC success criteria document, PoC closure and other documents Secures alignment on process and documents with the customer / prospect Owns the technical win phases of all active opportunities Understands the customer domain and database schema Provides OLAP and reporting solutions Works closely with customers to understand and resolve environment, OLAP cube or reporting related issues Coordinates with the solutioning team to execute the PoC as per the success plan Creates enhancement requests or identifies requests for new features on behalf of customers or hot prospects Experience 3 to 6 years Job Reference Number 10771
Posted 1 week ago
8.0 - 10.0 years
30 - 32 Lacs
Hyderabad
Work from Office
Candidate Specifications: Candidates should have 9+ years of experience, including 9+ years in Python and PySpark. Candidates should have strong experience in AWS and PL/SQL. Candidates should be strong in data management, covering data governance, data streaming, data lakes and data warehouses. Candidates should also have team handling and stakeholder management exposure. Candidates should have excellent written and verbal communication skills. Contact Person: Sheena Rakesh
Posted 1 week ago
15.0 years
0 Lacs
Greater Lucknow Area
On-site
Qualification 15+ years of experience managing and implementing high-end software products. Expertise in Java/J2EE, EDW/SQL or Hadoop/Hive/Spark, preferably hands-on. Good knowledge of any of the clouds (AWS/Azure/GCP) – must have. Has managed, delivered and implemented complex, high-complexity projects dealing with considerable data sizes (TB/PB). Experience handling migration projects. Good To Have Data ingestion, processing and orchestration knowledge. Role Senior Technical Project Managers (STPMs) are in charge of handling all aspects of technical projects. This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. You should collaborate with, and leverage, colleagues in business development, product management, analytics, marketing, engineering, and partner organizations. You will manage multiple projects and ensure all releases ship on time. You are responsible for managing and delivering the technical solution that supports the organization's vision and strategic direction. You should be capable of working with different types of customers and possess good customer-handling skills. Experience working in an ODC model and presenting the technical design and architecture to senior technical stakeholders. Should have experience defining the project and delivery plan for each assignment. Capable of doing resource allocation as per the requirements of each assignment. Should have experience driving RFPs. Should have experience with account management – revenue forecasting, invoicing, SOW creation, etc. Experience 15 to 20 years Job Reference Number 13010
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description ZiffyHealth is a BHARAT-focused IoT-enabled AI-driven health-tech platform, striving to bridge the disparity in healthcare professional availability between urban and rural India. Based on a Big-data Hadoop ecosystem, ZiffyHealth provides a 360° integrated healthcare platform supported by the Atal Innovation Mission, NITI Aayog, Government of India. Our mission is to make healthcare more accessible and affordable using cutting-edge technology and AI-powered processes. We aim to create a world where everyone can lead a healthy and productive life. Role Description This is a full-time, on-site role for a Telesales Representative located in Pune. The Telesales Representative will be responsible for making outbound sales calls, providing customer support, managing customer inquiries, and delivering excellent customer service. The representative will also assist in training new team members and contribute to achieving sales targets. Qualifications Strong Communication skills Customer Service and Customer Support experience Sales skills Experience in Training team members Excellent interpersonal and problem-solving abilities Ability to work in a fast-paced environment Experience in the healthcare industry is a plus Bachelor's degree in Business Administration, Marketing, or a related field
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backend & MLOps Engineer – Integration, API, and Infrastructure Expert 1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel in deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure. 2. Key Responsibilities: 2.1. Create RESTful and/or gRPC APIs for model services. 2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images. 2.3. Develop CI/CD pipelines for model training and deployment. 2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI. 2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines. 2.6. Build secured data ingestion and processing workflows (ETL/ELT). 2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/ M.Tech in Computer Science, Information Technology, or Software Engineering. 3.2. Strong foundation in distributed systems, databases, and cloud computing. 3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines. Professional Certifications: 3.4. AWS Solutions Architect/DevOps Engineer Professional 3.5. Google Cloud Professional ML Engineer or DevOps Engineer 3.6. Azure AI Engineer or DevOps Engineer Expert. 3.7. Kubernetes Administrator (CKA) or Developer (CKAD). 3.8. Docker Certified Associate Core Skills & Tools 4. Backend Development: 4.1. Languages: Python, FastAPI, Flask, Go, Java, Node.js, Rust (for performance-critical components) 4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js. 4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections. 4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols. 5. MLOps & Model Management: 5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect 5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML 5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML 5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store 5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions 6. Infrastructure & DevOps: 6.1. Containerization: Docker, Podman, container optimization. 6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift. 6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred). 6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible. 6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD. 6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins. 7. Database & Storage: 7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications) 7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch 7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus 7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg 7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO 7.6. Backend: Python (FastAPI, Flask), Node.js (optional) 7.7. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins 8. Secure Deployment: 8.1. Military-grade security protocols and compliance 8.2. Air-gapped deployment capabilities 8.3. Encrypted data transmission and storage 8.4. 
Role-based access control (RBAC) & IDAM integration 8.5. Audit logging and compliance reporting 9. Edge Computing: 9.1. Deployment on naval vessels with air gapped connectivity. 9.2. Optimization of applications for resource-constrained environment. 10. High Availability Systems: 10.1. Mission-critical system design with 99.9% uptime. 10.2. Disaster recovery and backup strategies. 10.3. Load balancing and auto-scaling. 10.4. Failover mechanisms for critical operations. 11. Cross-Compatibility Requirements: 11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI). 11.2. Develop model loaders for AI Engineer's ONNX/ serialized models. 11.3. Provide UI developers with test environments, mock data, and endpoints. 11.4. Support frontend debugging, edge deployment bundling, and user role enforcement. 12. Experience Requirements 12.1. Production experience with cloud platforms and containerization. 12.2. Experience building and maintaining APIs serving millions of requests. 12.3. Knowledge of database optimization and performance tuning. 12.4. Experience with monitoring and alerting systems. 12.5. Architected and deployed large-scale distributed systems. 12.6. Led infrastructure migration or modernization projects. 12.7. Experience with multi-region deployments and disaster recovery. 12.8. Track record of optimizing system performance and cost
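As a flavour of responsibility 2.4 above (integrating models as microservices with FastAPI), here is a minimal, hedged sketch of serving a serialized model behind a REST endpoint. The model file name and feature schema are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: exposing a pre-trained, serialized model as a REST microservice.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="model-service")
model = joblib.load("model.joblib")  # hypothetical serialized model artifact

class Features(BaseModel):
    values: list[float]  # flat feature vector, for illustration only

@app.post("/predict")
def predict(payload: Features):
    prediction = model.predict([payload.values])
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn main:app --host 0.0.0.0 --port 8000
# The same entry point can be containerized for Kubernetes deployment.
```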
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express Team Overview: Global Credit & Model Risk Oversight, Transaction Monitoring & GRC Capabilities (CMRC) provides independent challenge and ensures that significant Credit and Model risks are properly evaluated and monitored, and Anti-Money Laundering (AML) risks are mitigated through the transaction monitoring program. In addition, CMRC hosts the central product organization responsible for the ongoing maintenance and modernization of GRC platforms and capabilities. How will you make an impact in this role? The AML Data Capabilities team was established with a mission to own and govern data across products – raw data, derivations, organized views to cater for analytics and production use cases and to manage the end-to-end data quality. This team comprises of risk data experts with deep SME knowledge of risk data, systems and processes covering all aspects of customer life cycle. Our mission is to build and support Anti-Money Laundering Transaction Monitoring data and rule needs in collaboration with Strategy and technology partners with focus on our core tenets of Timeliness , Quality and process efficiency. Responsibilities include: · Develop and Maintain Organized Data Layers to cater for both Production use cases and Analytics for Transaction Monitoring of Anti-Money Laundering rules. · Manage end to end Big Data Integration processes for building key variables from disparate source systems with 100% accuracy and 100% on time delivery · Partner closely with Strategy and Modeling teams in building incremental intelligence, with strong emphasis on maintaining globalization and standardization of attribute calculations across portfolios. · Partner with Tech teams in designing and building next generation data quality controls. · Drive automation initiatives within existing processes and fully optimize delivery effort and processing time · Effectively manage relationship with stakeholders across multiple geographies · Contribute into evaluating and/or developing right tools, common components, and capabilities · Follow industry best agile practices to deliver on key priorities Implementation of defined rules on Lucy platform in order to identify the AML alerts. · Ensuring process and actions are logged and support regulatory reporting, documenting the analysis and the rule build in form of qualitative document for relevant stakeholders. 
Minimum Qualifications · Academic Background: Bachelor’s degree with up to 2 years of relevant work experience · Strong Hive, SQL skills, knowledge of Big data and related technologies · Hands-on experience with Hadoop & Shell Scripting is a plus · Understanding of Data Architecture & Data Engineering concepts · Strong verbal and written communication skills, with the ability to cater to versatile technical and non-technical audiences · Willingness to collaborate with cross-functional teams to drive validation and project execution · Good to have skills - Python / Py-Spark · Excellent analytical and critical thinking with attention to detail · Excellent planning and organizational skills, including the ability to manage inter-dependencies and execute under stringent deadlines · Exceptional drive and commitment; ability to work and thrive in a fast-changing, results-driven environment; and proven ability in handling competing priorities Behavioral Skills/Capabilities: Enterprise Leadership Behaviors Set the Agenda: · Ability to apply thought leadership and come up with ideas · Take the complete perspective into account while designing solutions · Use market best practices to design solutions Bring Others with You: · Collaborate with multiple stakeholders and other scrum teams to deliver on promise · Learn from peers and leaders · Coach and help peers Do It the Right Way: · Communicate effectively · Be candid and clear in communications · Make decisions quickly and effectively · Live the company culture and values We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Engineer Responsibilities: - Design, build, and deploy advanced machine learning models and algorithms to solve complex problems in various domains, such as natural language processing, computer vision, and predictive analytics. - Collaborate closely with cross-functional teams, including data scientists, engineers, and product managers, to identify opportunities for leveraging company data to improve business outcomes. - Perform data mining and feature extraction using advanced statistical and machine learning techniques. - Evaluate and validate the performance of machine learning models using appropriate metrics and techniques. - Optimize and fine-tune machine learning models for optimal performance in production environments. - Stay up-to-date with the latest advancements in machine learning and artificial intelligence research. - Contribute to the development of best practices in machine learning and share knowledge with peers and junior team members. - Provide technical leadership and mentoring to other team members as needed. Requirements: - 4+ years of experience in developing and deploying machine learning models in a professional setting. - Strong understanding of various machine learning algorithms, such as linear regression, logistic regression, decision trees, SVM, neural networks, ensemble methods, and reinforcement learning. - Demonstrated experience in working with large datasets and using big data technologies, such as Hadoop, Spark, and distributed computing. - Proficient in programming languages, such as Python. - Experience with machine learning frameworks and libraries, such as TensorFlow, Keras, PyTorch, or Scikit-learn. - Familiarity with data visualization tools, such as Tableau, Matplotlib, or D3.js. Preferred Qualifications: - Experience in working with cloud platforms, such as AWS, Google Cloud, or Azure. - Knowledge of natural language processing, computer vision, or deep learning techniques. - Experience developing end-to-end machine learning pipelines, from data gathering and preprocessing to model deployment and monitoring.
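To make the "evaluate and validate model performance" responsibility above concrete, here is a minimal scikit-learn train/validate sketch; the dataset and hyperparameters are illustrative only and not prescribed by the posting.

```python
# Minimal sketch of training a model and validating it on a hold-out set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the hold-out set with an appropriate metric
probs = model.predict_proba(X_test)[:, 1]
print(f"Hold-out ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```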
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance. Data Engineering Concepts : Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data. Programming and Scripting: Proficiency in Python, Pyspark and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
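As a small, hedged illustration of working with the AWS services listed above, the sketch below starts a Glue ETL job and polls its state with boto3; the job name and region are placeholders invented for the example.

```python
# Hypothetical sketch: triggering and monitoring an AWS Glue job run with boto3.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="orders-nightly-etl")  # hypothetical job name
run_id = run["JobRunId"]

# Poll until the job reaches a terminal state
while True:
    job_run = glue.get_job_run(JobName="orders-nightly-etl", RunId=run_id)["JobRun"]
    state = job_run["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue job finished with state: {state}")
        break
    time.sleep(30)
```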
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing Design and implement performance and operational enhancements for scalable data systems Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions Collaborate with software engineers, data analysts, and business stakeholders across Agile teams Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs Partner with architecture teams to drive forward-thinking data platform solutions Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership Mentor junior engineers and collaborate on solution design with team members and product owners Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor’s degree or equivalent experience Hands-on experience with cloud data services (AWS, Azure, or GCP) Experience building and maintaining ETL/ELT pipelines in enterprise environments Experience integrating with RESTful APIs Experience with Agile methodologies (Scrum, Kanban) Knowledge of data governance, security, privacy, and vulnerability management Understanding of authorization protocols (OAuth) and API integration Solid proficiency in SQL, NoSQL, and data modeling Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark Advanced Python skills for data engineering and data science (beyond Jupyter notebooks) Familiarity with big data technologies such as Spark, Hadoop, and Databricks Ability to build modular, testable, and reusable data solutions Solid grasp of data engineering concepts including: Data Catalogs Data Warehouses Data Lakes (especially Iceberg) Data Dictionaries Preferred Qualifications Experience with GitHub, Terraform, and GitHub Actions Experience with real-time data streaming (Kafka, Kinesis) Experience with feature engineering and machine learning pipelines (MLOps) Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery) Familiarity with AWS native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams) Glue (Data Catalog, ETL, Streaming) SageMaker, Athena, Redshift (including Spectrum) Demonstrated ability to mentor and guide junior engineers At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
5.5 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Company KPMG in India is a leading professional services firm established in August 1993. The firm offers a wide range of services, including audit, tax, and advisory, to national and international clients across various sectors. KPMG operates from offices in 14 cities, including Mumbai, Bengaluru, Chennai, and Delhi. KPMG India is known for its rapid, performance-based, industry-focused, and technology-enabled services. The firm leverages its global network to provide informed and timely business advice, helping clients mitigate risks and seize opportunities. KPMG India is committed to quality and excellence, fostering a culture of growth, innovation, and collaboration. About the job: Spark/Scala Developer Experience: 5.5 to 9 years Location: Mumbai We are seeking a skilled Spark/Scala Developer with 5.5 - 9 years of experience in Big Data engineering. The ideal candidate will have strong expertise in Scala programming, SQL, and data processing using Apache Spark within Hadoop ecosystems. Key Responsibilities: Design, develop, and implement data ingestion and processing solutions for batch and streaming workloads using Scala and Apache Spark. Optimize and debug Spark jobs for performance and reliability. Translate functional requirements and user stories into scalable technical solutions. Develop and troubleshoot complex SQL queries to extract business-critical insights. Required Skills: 2+ years of hands-on experience in Scala programming and SQL. Proven experience with Hadoop Data Lake and Big Data tools. Strong understanding of Spark job optimization and performance tuning. Ability to work collaboratively in an Agile environment. Equal Opportunity Statement KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring for one of the IT product-based companies Job Title: - Senior Data Engineer Exp-5+ years Location: - Gurgaon/Pune Work Mode: - Hybrid Skills: - Azure and Databricks Programming Language- Python, PowerShell; .Net/Java are a plus What you will do Participate in designing and developing highly performant and scalable large-scale Data and Analytics products Participate in requirements grooming, analysis and design discussions with fellow developers, architects and product analysts Participate in product planning by providing estimates on user stories Participate in daily standup meetings and proactively provide status on tasks Develop high-quality code according to business and technical requirements as defined in user stories Write unit tests that will improve the quality of your code Review code for defects and validate implementation details against user stories Work with quality assurance analysts who build test cases that validate your work Demo your solutions to product owners and other stakeholders Work with other Data and Analytics development teams to maintain consistency across the products by following standards and best software development practices Provide third-tier support for our product suite What you will bring 3+ years of Data Engineering and Analytics experience 2+ years of working experience with Azure and Databricks (or Apache Spark, Hadoop and Hive) Knowledge and application of the following technical skills: T-SQL/PL-SQL, PySpark, Azure Data Factory, Databricks (or Apache Spark, Hadoop and Hive), and Power BI or equivalent Business Intelligence tools Understanding of dimensional modeling and Data Warehouse concepts Programming skills such as Python, PowerShell, .Net/Java are a plus Git repository experience and a thorough understanding of branching and merging strategies 2 years' experience developing in an Agile Software Development Life Cycle and Scrum methodology Strong planning and time management skills Advanced problem-solving skills and a data-driven mindset Excellent written and oral communication skills Team player who fosters an environment of shared success, is passionate about always learning and improving, self-motivated, open minded, and creative What we would like to see Bachelor's degree in computer science or related field Healthcare knowledge is a plus
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
The key focus for the senior data architect is to perform planning aligned to key data solutions, build and participate the architecture capability building, performdata architecture and design, manage data architecture risk and compliance, provide design and build governance and support and communicate and share knowledge around the architecture practices, guardrails, blueprints and standards related to the data solution design. Describe The Main Activities Of The Job (description) Planning Lead data solution requirements gathering and ensure alignment with business objectives and constraints Define and refine data architecture runways for intentional architecture with the key stakeholders Provide input into business cases and costing Participate and provide data architectural runway requirements into Programme Increment (PI) Planning Architecture Capability Develop and oversee data architecture views and ensure alignment with enterprise architecture Maintain and oversee the data solution artifacts in the set enterprise repository and knowledge portals aligned to the rest of the architecture Manage the data architecture processes based on the requirements for each architype Manage change impact of the data architecture with stakeholders Develop and participate in the build of the data architecture practice with embedded architects and engineers including the relevant methods, repository and tools Manage the data architecture considering the business, application, information/data and technology viewpoints Establish, enforce and implement data standards, guardrails, frameworks, and patterns Solution Design Lead and review logical and detailed data architecture Evaluate and approve data solution options and technology selections Select appropriate technology, tools and build for the solution Oversee and maintain the data solution blueprints Drive incremental modernisation initiatives in the delivery area Risk, Governance and Compliance Identify, assess and mitigate risks at a data solution architecture level Ensure and enforce compliance with policies, standards, and regulations Lead data architecture reviews and integrate with governance functions Integrate with other governance and compliance functions to ensure continuity in managing the investment and risk for the organisation pertaining to the solution architectures Establish and provide data standards, guidance, and tools to delivery teams Implementation and Collaboration Establish and provide data solution architectures and tools to thedelivery and data engineering teams Lead and facilitate collaboration with delivery teams to achieve architecture objectives Manage and resolve deviations and ensure up-to-date data solution design documentation Identify opportunities to optimise delivery of solutions Oversee and conduct post-implementation reviews Ensure the data architecture supports CI/CD pipelines to facilitate rapid and reliable deployment of data solutions Implement automated testing frameworks for data solutions to ensure quality and reliability throughout the development lifecycle Establish performance monitoring and optimisation practices to ensure data solutions meet performance benchmarks and can scale as needed Integrate robust data security measures, including encryption, access controls, and regular security audits, into the implementation process Communication and Knowledge Sharing Communicate and advocate up-to-date data solution architecture views Communicate the relevant data standards, practices, guardrails and tools 
to stakeholders relevant to the solution design Ensure IT teams are well-informed and trained in architecture requirements Communicate and collaborate with stakeholders' relevant views on planning, technology assessments, risk, compliance, governance and implementation assessments Foster collaboration between data architects, data engineers, and other IT teams through regular cross-functional meetings and agile ceremonies Communicate and maintain up-to-date blueprint designs for key data solutions Ensure effective participation in the agile ceremonies (PI planning, sprint planning, retrospectives, demos) Implement regular feedback loops with stakeholders and end-users to continuously improve data solutions based on real-world usage and requirements Create a culture of knowledge sharing by organising regular workshops, training sessions, and documentation updates to keep all team members informed about the latest data architecture practices and tools Minimum Qualifications/Experience (required For The Job) Matric Degree or diploma in Information Technology, Computer Science, Engineering OR relevant diploma / degree Experience:Requires a minimum of 5 years in a technical/solution design role and a minimum of 7 years relevant IT experience Data Experience: Required a minimum of 7 years related experience in data engineering, data modeling and design and data management and governance Data Related Experience: Big Data and Analytics (e.g., Hadoop, Spark) Data Warehousing (e.g., DataBricks, Snowflake, Redshift) Master Data Management (MDM) Data Lakes and Data Mesh Metadata Management ETL/ELT Processes Data Privacy and Compliance Cloud Data Services Additional Qualifications/Experience (preferred) DAMA-DMBOK TOGAF ArchiMate Cloud Certifications (AWS, Azure) Financial Industry Experience Competencies Required Related attributes and competencies related to architecture: Critical thinking/problem solving Teamwork/collaboration Effective Communication Skills Leadership skills Knowledge and experience in architecture domains Knowledge and experience in architecture methods, frameworks and tools Solution Design Experience Agile Knowledge and Experience Cloud Knowledge and Experience Data related competencies: Data modeling, database design and data governance best practices and implementation Data architecture principles and methodologies Data integration technologies and tools Data management and governance
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Are you a talented Data Scientist (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer, LLM Engineer) either looking for your next big challenge working remotely, OR employed but open to offers from elite US companies to work remotely? Submit your resume to GlobalPros.ai, an exclusive community of the world’s top pre-vetted developers dedicated to precisely matching you with our US employers. GlobalPros.ai is followed internationally by over 13,000 employers, agencies and the world’s top developers. We are currently searching for a full-time AI/ML developer (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer) to work remotely for our US employer clients. What We Offer: Competitive Compensation. Compensation is negotiable and commensurate with your experience and expertise. Pre-vetting so you’re 2x more likely to be hired. Recent studies by Indeed and LinkedIn show pre-vetted candidates like you are twice as likely to be hired. Shortlist competitive advantage. Our machine learning technology matches you precisely to job requirements, and because you’re pre-vetted you’re shortlisted ahead of other candidates. Personalized career support. Free one-on-one career counseling and interview prep to help guarantee you succeed. Anonymity. If you’re employed but open to offers, your profile is anonymous and is not available on our website or otherwise online. When matched with our clients, your profile stays anonymous until you agree to be interviewed. So there’s no risk in submitting your resume now. We're Looking For: Experience. Must have at least 3 years of experience. Role. AI/ML developer (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer) Skills. TensorFlow, PyTorch, Scikit-learn, Python, Java, C++, R, AWS, Azure, GCP, SQL, NoSQL, Hadoop, Spark, Docker, Kubernetes, AWS Redshift, Google BigQuery. Willing to work full-time (40 hours per week). Available for an hour of assessment testing. Being deeply vetted with a data-enhanced resume and matched precisely by our machine learning algorithms substantially increases the probability of being hired quickly and at higher compensation levels than unvetted candidates. It's your substantial competitive advantage in a crowded job market.
Posted 1 week ago
0 years
20 - 216 Lacs
Indore, Madhya Pradesh, India
On-site
Tips: Provide a summary of the role, what success in the position looks like, and how this role fits into the organization overall. Responsibilities [Be specific when describing each of the responsibilities. Use gender-neutral, inclusive language.] Example: Determine and develop user requirements for systems in production, to ensure maximum usability Qualifications [Some qualifications you may want to include are Skills, Education, Experience, or Certifications.] Example: Excellent verbal and written communication skills Skills: usability,data analysis,machine learning,statistical analysis,communication skills,big data technologies (hadoop, spark),programming (python, r),data visualization,data,sql
Posted 1 week ago
0.0 - 3.0 years
0 - 0 Lacs
Pune, Maharashtra
On-site
Job Summary: The Snowflake Developer will be responsible for designing, developing, and maintaining data pipelines, data warehouses, and data models using Snowflake. They will collaborate with data analysts, data scientists, and business clients to ensure that the data architecture meets the needs of the organization. Key Responsibilities: Develop, design and maintain data pipelines, data warehouses, and data models in Snowflake Create and manage ETL processes to move data from various sources into Snowflake Ensure data quality, integrity, and consistency across all data sources and data models Work with clients to identify requirements and design solutions that meet their needs Optimize Snowflake performance and troubleshoot issues as they arise Develop and maintain documentation of data architecture, data models, and ETL processes Stay up-to-date with Snowflake updates, new features, and best practices Participate in code reviews, testing, and debugging activities Collaborate with data analysts and data scientists to design and implement analytics solutions Qualifications: Bachelor's degree in Computer Science, Information Systems or related field Minimum of 4-7 years of experience in designing and developing data warehouses and data models using Snowflake Strong understanding of SQL and experience with database technologies such as Oracle, SQL Server, MySQL, etc. Knowledge of ETL tools and processes such as Informatica, Talend, etc. Experience with scripting languages such as Python, Perl, etc. Familiarity with data modeling tools such as ERwin, ER/Studio, etc. Strong problem-solving and analytical skills Excellent written and verbal communication skills Ability to work independently and in a team-oriented environment Preferred Qualifications: Experience with cloud technologies such as AWS, Azure, or Google Cloud Platform Certification in Snowflake or related technology Experience with Big Data technologies such as Hadoop, Spark, etc. Experience with data visualization tools such as Tableau, Power BI, etc. Familiarity with Agile development methodologies Note: This job description is not intended to be all-inclusive. The employee may perform other related duties as negotiated to meet the ongoing needs of the organization. Job Types: Full-time, Permanent Pay: ₹13,801.62 - ₹63,954.98 per month Benefits: Health insurance Provident Fund Location Type: In-person Schedule: Day shift Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required) Application Question(s): What is your current annual CTC in INR Lacs? What is your notice period in terms of days? Experience: Snowflake Developer: 3 years (Required) Work Location: In person
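As an illustration of one ETL step this role describes (moving staged data into Snowflake), here is a minimal sketch using the Snowflake Python connector. The account, credentials, stage, and table names are hypothetical placeholders.

```python
# Hypothetical sketch: loading files from an external stage into a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files already landed in an external stage into the target table
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    print(cur.fetchall())  # one row of load results per staged file
finally:
    conn.close()
```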
Posted 1 week ago
3.0 years
15 - 20 Lacs
Madurai, Tamil Nadu
On-site
Dear Candidate, greetings of the day!
I am Kantha, and I am reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net
Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is to deliver strategic technology solutions aligned with its business partners' goals. We are a leading full-scale software and mobile app development company, driven by the mantra "Clients' Vision is our Mission", and we stay true to it. Our aim is to be a technologically advanced and well-regarded organization providing high-quality, cost-efficient services through long-term client relationships. We operate in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Techmango: https://www.techmango.net/
Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate
About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.
Role Summary
As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP).
Define data strategy, standards, and best practices for cloud data engineering and analytics.
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery (a minimal pipeline sketch follows this posting).
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery).
Architect data lakes, warehouses, and real-time data platforms.
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP).
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers.
Create documentation, including high-level design (HLD) and low-level design (LLD), and oversee development standards.
Provide technical leadership in architectural decisions and future-proofing the data ecosystem.
Required Skills & Qualifications:
5+ years of experience in data architecture, data engineering, or enterprise data platforms.
Minimum 3 years of hands-on experience with GCP data services.
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner.
Python / Java / SQL.
Data modeling (OLTP, OLAP, Star/Snowflake schema).
Experience with real-time data processing, streaming architectures, and batch ETL pipelines.
Good understanding of IAM, networking, security models, and cost optimization on GCP.
Prior experience leading cloud data transformation projects.
Excellent communication and stakeholder management skills.
Preferred Qualifications:
GCP Professional Data Engineer / Architect certification.
Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics.
Exposure to AI/ML use cases and MLOps on GCP.
Experience working in agile environments and client-facing roles.
What We Offer:
Opportunity to work on large-scale data modernization projects with global clients.
A fast-growing company with a strong tech and people culture.
Competitive salary, benefits, and flexibility.
A collaborative environment that values innovation and leadership.
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Application Question(s):
Current CTC?
Expected CTC?
Notice Period? (If you are serving your notice period, please mention your last working day.)
Experience:
GCP Data Architecture: 3 years (Required)
BigQuery: 3 years (Required)
Cloud Composer (Airflow): 3 years (Required)
Location: Madurai, Tamil Nadu (Required)
Work Location: In person
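For candidates unfamiliar with the stack named above, here is a minimal sketch of the kind of streaming ingestion pipeline the responsibilities describe, written with Apache Beam (the SDK behind Dataflow): it reads JSON messages from Pub/Sub and appends them to a BigQuery table. The project, topic, bucket, table, and schema names are placeholders, not details of any actual TechMango or Livingston pipeline; it assumes the apache-beam[gcp] package is installed.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Placeholder project, region, and bucket; streaming=True because the source is Pub/Sub.
    options = PipelineOptions(
        streaming=True,
        project="example-project",
        region="us-central1",
        runner="DataflowRunner",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,event_ts:TIMESTAMP",  # illustrative schema
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```

The same code can be exercised locally by switching the runner to DirectRunner; on Dataflow it scales out automatically, which is the usual reason these roles pair Beam with Pub/Sub and BigQuery.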
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description
About KPMG in India
KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms and are conversant with local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada.
KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.
Data Architect (Analytics) – AD
Location: NCR (preferred)
Job Summary: The Data Architect will be responsible for designing and managing the data architecture for data analytics projects. This role involves ensuring the integrity, availability, and security of data, as well as optimizing data systems to support business intelligence and analytics needs.
Key Responsibilities
Design and implement data architecture solutions to support data analytics and business intelligence initiatives.
Collaborate with stakeholders to understand data requirements and translate them into technical specifications.
Design and implement data systems and infrastructure setups, ensuring scalability, security, and performance.
Develop and maintain data models, data flow diagrams, and data dictionaries.
Ensure data quality, consistency, and security across all data sources and systems.
Optimize data storage and retrieval processes to enhance performance and scalability.
Evaluate and recommend data management tools and technologies.
Provide guidance and support to data engineers and analysts on best practices for data architecture.
Conduct assessments of data systems to identify areas for improvement and optimization.
Demonstrate an understanding of Government of India data governance policies and regulatory requirements.
Troubleshoot complex technical problems hands-on in production environments.
Equal employment opportunity information
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.
Qualifications
Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field (Master's degree preferred).
Proven experience as a Data Architect or in a similar role, with a focus on data analytics projects.
Strong knowledge of data architecture frameworks and methodologies.
Proficiency in database management systems (e.g., SQL, NoSQL), data warehousing, and ETL processes.
Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud).
Certification in data architecture or related fields.
Posted 1 week ago
5.0 - 8.0 years
12 - 18 Lacs
Bengaluru
Work from Office
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3-5 years of experience in ETL development and data integration (a brief illustrative sketch follows this list).
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or MySQL.
• Familiarity with data warehousing concepts and methodologies.
• Hands-on experience with ETL tools like Informatica, Talend, SSIS, or similar.
• Knowledge of data modeling and data governance best practices.
• Strong analytical skills and attention to detail.
• Excellent communication and teamwork skills.
• Experience with Snowflake or willingness to learn and implement Snowflake-based solutions.
• Experience with Big Data technologies such as Hadoop or Spark.
• Knowledge of cloud platforms like AWS, Azure, or Google Cloud and their ETL services.
• Familiarity with data visualization tools such as Tableau or Power BI.
• Hands-on experience with Snowflake for data warehousing and analytics.
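To make the requirements above concrete, the snippet below sketches the extract-transform-load pattern that tools such as Informatica, Talend, or SSIS automate, using plain Python with pandas and SQLAlchemy. The file name, column names, and connection string are illustrative placeholders, not part of any specific employer's stack.

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: read the day's source file (path is a placeholder).
orders = pd.read_csv("orders_2024-01-01.csv", parse_dates=["order_date"])

# Transform: basic cleansing plus a derived column, the kind of rule an ETL tool encodes.
orders = orders.dropna(subset=["order_id", "customer_id"])
orders["order_total"] = orders["quantity"] * orders["unit_price"]

# Load: append into a warehouse staging table (connection URL is illustrative).
engine = create_engine("postgresql+psycopg2://etl_user:secret@warehouse-host:5432/analytics")
orders.to_sql("stg_orders", engine, schema="staging", if_exists="append", index=False)
```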
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work.
At 66degrees, we believe in embracing the challenge and winning together. These values guide us not only in achieving our goals as a company but also in supporting our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.
Overview of Role
As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.
Responsibilities
Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption (a minimal orchestration sketch follows this posting).
AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services, including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.
Qualifications
1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
SQL skills for complex data manipulation, querying, and optimization.
Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.
66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
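As an illustration of the ETL/ELT orchestration the responsibilities mention, here is a minimal Cloud Composer (Airflow) DAG that loads files from Cloud Storage into a BigQuery staging table and then runs an in-warehouse SQL transform. It assumes Airflow 2.4+ with the Google provider installed; the bucket, project, dataset, and table names are placeholders rather than details of 66degrees' environment.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_events_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load the day's raw CSV files from Cloud Storage into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_events",
        bucket="example-landing-bucket",
        source_objects=["events/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.staging.events_{{ ds_nodash }}",
        source_format="CSV",
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )

    # ELT step: the transform runs as SQL inside BigQuery, not in the Airflow worker.
    transform = BigQueryInsertJobOperator(
        task_id="transform_events",
        configuration={
            "query": {
                "query": (
                    "INSERT INTO `example-project.analytics.events` "
                    "SELECT event_id, user_id, TIMESTAMP(event_ts) AS event_ts "
                    "FROM `example-project.staging.events_{{ ds_nodash }}`"
                ),
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```

Pushing the transform into BigQuery rather than the scheduler is the usual reason such roles emphasize ELT over classic ETL: Composer only coordinates the steps, while the warehouse does the heavy lifting.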
Posted 1 week ago