12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Senior Software Engineer – Backend (Python)
📍 Location: Hyderabad (Hybrid)
🕒 Experience: 5 – 12 years

About the Role: We are looking for a Senior Software Engineer – Backend with strong expertise in Python and modern big data technologies. This role involves building scalable backend solutions for a leading healthcare product-based company.

Key Skills:
- Programming: Python, Spark-Scala, PySpark (PySpark API)
- Big Data: Hadoop, Databricks
- Data Engineering: SQL, Kafka (a small streaming sketch follows this posting)
- Strong problem-solving skills and experience in backend architecture

Why Join?
- Hybrid work model in Hyderabad
- Opportunity to work on innovative healthcare products
- Collaborative environment with modern tech stack

Keywords for Search: Python, PySpark, Spark, Spark-Scala, Hadoop, Databricks, Kafka, SQL, Backend Development, Big Data Engineering, Healthcare Technology
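Purely as an illustration of the stack this posting lists (not code from the hiring company), here is a minimal PySpark structured-streaming sketch that reads events from a Kafka topic and lands them as files. The broker address, topic name, and paths are hypothetical placeholders, and the Spark-Kafka connector package is assumed to be available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the spark-sql-kafka connector package is on the classpath.
spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "claims-events")              # placeholder topic
    .load()
)

# Kafka delivers the payload as binary; cast it to string before downstream parsing.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/raw/claims")                # placeholder landing path
    .option("checkpointLocation", "/data/chk/claims")  # required for streaming sinks
    .start()
)
query.awaitTermination()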
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Description
Senior Data Engineer

Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Engineer to join our growing data engineering team. You'll work in a collaborative Agile environment using the latest engineering best practices, with involvement in all aspects of the software development lifecycle. You will craft and develop curated data products, applying standard architectural and data modeling practices to maintain the foundation data layer that serves as a single source of truth across Zendesk. You will primarily be developing Data Warehouse solutions in BigQuery/Snowflake using technologies such as dbt, Airflow, and Terraform.

What You Get To Do Every Single Day
- Collaborate with team members and business partners to collect business requirements, define successful analytics outcomes and design data models
- Serve as Data Model subject matter expert and data model spokesperson, demonstrated by the ability to address questions quickly and accurately
- Implement the Enterprise Data Warehouse by transforming raw data into schemas and data models for various business domains using SQL & dbt
- Design, build, and maintain ELT pipelines in the Enterprise Data Warehouse to ensure reliable business reporting using Airflow, Fivetran & dbt (a minimal sketch follows this posting)
- Optimize data warehousing processes by refining naming conventions, enhancing data modeling, and implementing best practices for data quality testing
- Build analytics solutions that provide practical insights into customer 360, finance, product, sales and other key business domains
- Build and promote best engineering practices in areas of version control, CI/CD, code review and pair programming
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery
- Work with data and analytics experts to strive for greater functionality in our data systems

Basic Qualifications
What you bring to the role:
- 5+ years of data engineering experience building, working with and maintaining data pipelines and ETL processes on big data environments
- 5+ years of experience in Data Modeling and Data Architecture in a production environment
- 5+ years of writing complex SQL queries
- 5+ years of experience with cloud columnar databases (we use Snowflake)
- 2+ years of production experience working with dbt and designing and implementing Data Warehouse solutions
- Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions
- Strong documentation skills for pipeline design and data flow diagrams
- Intermediate experience with any of the programming languages Python, Go, Java or Scala (we primarily use Python)
- Integration with 3rd party API SaaS applications like Salesforce, Zuora, etc.
- Ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Preferred Qualifications
- Hands-on experience with the Snowflake data platform, including administration, SQL scripting, and query performance tuning
- Good knowledge of modern as well as classic data modeling approaches - Kimball, Inmon, etc.
- Demonstrated experience in one or many business domains (Finance, Sales, Marketing)
- 3+ completed "production-grade" projects with dbt
- Expert knowledge of Python

What Does Our Data Stack Look Like
- ELT (Snowflake, Fivetran, dbt, Airflow, Kafka, Hightouch)
- BI (Tableau, Looker)
- Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.

Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager.

The Intelligent Heart Of Customer Experience
Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week.

Zendesk is an equal opportunity employer, and we're proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
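As a loose illustration of the Airflow + dbt workflow this posting describes (not Zendesk's actual pipeline), the sketch below shows a DAG that builds dbt models and then runs dbt tests. The DAG id, project path, target, and schedule are assumed placeholders; it presumes Airflow 2.4+ and a dbt project installed on the worker.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",                 # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                        # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/warehouse && dbt run --target prod",   # placeholder project path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/warehouse && dbt test --target prod",
    )
    dbt_run >> dbt_test                       # data-quality tests run only after models build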
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Morgan Stanley
Model Risk - Asset Management - Associate

Profile Description
We're seeking someone to join our team as an Associate on the Model Risk - Asset Management team.

Firm Risk Management
In the Firm Risk Management division, we advise businesses across the Firm on risk mitigation strategies, develop tools to analyze and monitor risks and lead key regulatory initiatives.

Company Profile
Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. Since 1935, Morgan Stanley has been known as a global leader in financial services, always evolving and innovating to better serve our clients and our communities in more than 40 countries around the world.

Primary Responsibilities
What you'll do in the role:
- Provide independent review and validation compliant with MRM policies and procedures, regulatory guidance and industry leading practices, including evaluating conceptual soundness, quality of model methodology, model limitations, data quality, and on-going monitoring of model performance
- Take initiative and responsibility for end-to-end delivery of a stream of Model Validation and related Risk Management deliverables
- Write Model Review findings in validation documents that could be used for presentations both internally (model and tool developers, business unit managers, Audit, various global Committees) as well as externally (Regulators)
- Verbally communicate results and debate issues, challenges and methodologies with internal audiences including senior management
- Represent the MRM team in interactions with regulatory and audit agencies as and when required
- Follow financial markets & business trends on a frequent basis to enhance the quality of Model and Tool Validation and related Risk Management deliverables

What You'll Bring To The Role
Qualifications / Skills required (essential / preferred):
- Masters or Doctorate degree in a quantitative discipline such as Statistics, Mathematics, Physics, Computer Science or Engineering is essential
- Experience in a Quant role in validation of models, in development of models, or in a technical role in financial institutions (e.g. Developer) is essential
- Strong written & verbal communication skills, including debating different viewpoints and making formal presentations of complex topics to a wider audience, is preferred
- 4-6 years of relevant work experience in a Model Validation role in a bank or financial institution
- Proficient programmer in Python; knowledge of other programming languages like R, Scala, MATLAB etc. is preferred
- Willingness to learn new and complex topics and adapt oneself (continuous learning) is preferred
- Working knowledge of statistical techniques, quantitative finance and programming is essential; good understanding of various complex financial instruments is preferred
- Knowledge of popular machine learning techniques is preferred
- Relevant professional certifications like CQF, CFA, or progress made towards them, are preferred
- Desire to work in a dynamic, team-oriented, fast-paced environment focusing on challenging tasks mixing fundamental, quantitative, and market-oriented knowledge and skills is essential

What You Can Expect From Morgan Stanley
We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years.
Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
Posted 1 day ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
How often have you had an opportunity to be an early member of a team that is tasked with solving a huge customer need through disruptive and innovative technology? The Amazon Stores FinTech Team is seeking a Data Engineer to design and build a flagship multi-year project to inject automated planning and predictive forecasting, and help shape an end UI, built on the AWS Data Lake. We are the tech team that builds and supports Worldwide Amazon Stores – one of the fastest growing, largest and most complex supply chains in the world. This position requires a high level of technical expertise with data engineering concepts.

Key job responsibilities
- Expertise and experience in building and scaling Finance Planning Applications
- Deliver on data architecture projects and implementation of next generation financial solutions
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda etc.
- Build and deliver high quality data architecture and pipelines to support customer reporting needs
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Continually improve ongoing data extraction, cleansing, validation, transformation and ingestion processes, automating or simplifying self-service support for customers
- Collaborate with business users, development teams, and operations engineering teams to tackle business requirements and deliver against high operational standards of system availability and reliability
- Dive deep to resolve problems at their root, looking for failure patterns and suggesting fixes
- Prepare run books, methods of procedure, tutorials, and training videos on best practices for the team
- Build monitoring dashboards and create critical alarms for the system
- Build and enhance software to extend system, application, or tool functionality to improve business processes and meet end user needs while working within the overall system architecture
- Build automated unit test and regression test frameworks that can be leveraged across multiple data systems
- Build data reconciliation frameworks and tools that can be leveraged across multiple data systems (see the sketch after this posting)
- Diagnose and resolve operational issues, perform detailed root cause analysis, respond to suggestions for enhancements
- Identify process improvement opportunities to drive innovation
- Rotational on-call availability for critical systems support

Basic Qualifications
- 3+ years of data engineering experience
- 3+ years of SQL experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- 5+ years of experience working in the Financial Planning & Reporting domain

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- 3+ years of experience in designing and building data integration with planning technologies such as IBM Cognos Planning Analytics/TM1

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - BLR 14 SEZ
Job ID: A3013849
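As one hedged example of the "data reconciliation framework" responsibility above (not Amazon's tooling), the helper below compares a row count and a control total between a source and a target system. The connection objects are assumed to expose a sqlite3-style execute(...).fetchone() interface, and the table and column names are placeholders.

def reconcile(source_conn, target_conn, table, amount_col):
    """Compare a row count and a control total between two systems.

    Both connections are assumed to expose an execute(...).fetchone()
    interface (e.g. sqlite3); table and column names are placeholders.
    """
    query = f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
    src_count, src_sum = source_conn.execute(query).fetchone()
    tgt_count, tgt_sum = target_conn.execute(query).fetchone()
    ok = src_count == tgt_count and abs(src_sum - tgt_sum) < 0.01
    return ok, {"rows": (src_count, tgt_count), "sum": (src_sum, tgt_sum)}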
Posted 1 day ago
4.0 - 7.0 years
5 - 8 Lacs
Hyderābād
On-site
About NationsBenefits:
At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization — transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.

Position Overview:
We are seeking a self-driven Data Engineer with 4–7 years of experience to build and optimize scalable ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. The role involves working across scrum teams to develop data solutions, ensure data governance with Unity Catalog, and support real-time and batch processing. Strong problem-solving skills, T-SQL expertise, and hands-on experience with Azure cloud tools are essential. Healthcare domain knowledge is a plus.

Job Description:
- Work with different scrum teams to develop all the quality database programming requirements of the sprint.
- Experience with Azure cloud tooling such as Databricks, Azure SQL, Data Factory (ADF), Data Lake, and Azure Storage, plus SSIS and advanced Python programming.
- Create and deploy scalable ETL/ELT pipelines with Azure Databricks by utilizing PySpark and SQL.
- Create Delta Lake tables with ACID transactions and schema evolution to support real-time and batch processing (a minimal example is sketched after this posting).
- Experience with Unity Catalog for centralized data governance, access control, and data lineage tracking.
- Independently analyse, solve, and correct issues in real time, providing end-to-end problem resolution.
- Develop unit tests so that changes can be tested automatically.
- Use SOLID development principles to maintain data integrity and cohesiveness.
- Interact with the product owner and business representatives to determine and satisfy needs.
- Sense of ownership and pride in your performance and its impact on the company's success.
- Critical thinker with strong problem-solving skills.
- Team player.
- Good time-management skills.
- Great interpersonal and communication skills.

Mandatory Qualifications:
- 4-7 years of experience as a Data Engineer.
- Self-driven with minimal supervision.
- Proven experience with T-SQL programming, Azure Databricks, Spark (PySpark/Scala), Delta Lake, Unity Catalog, ADLS Gen2.
- Exposure to Microsoft TFS, Visual Studio, and DevOps.
- Experience with cloud platforms such as Azure or equivalent.
- Analytical, problem-solving mindset.
- Healthcare domain knowledge.

Preferred Qualifications
- Healthcare Domain Knowledge
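The Delta Lake item above can be pictured with a minimal delta-spark sketch like the one below (an illustration under stated assumptions, not NationsBenefits code): it upserts a daily batch into a Delta table and allows new source columns via schema auto-merge. The storage paths and the claim_id merge key are hypothetical.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Allow new columns arriving from the source to be added to the target schema.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.read.format("parquet").load("/mnt/raw/claims_daily")  # placeholder source batch
target = DeltaTable.forPath(spark, "/mnt/delta/claims")               # placeholder Delta table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.claim_id = s.claim_id")             # hypothetical business key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)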
Posted 1 day ago
7.0 years
0 Lacs
Gurgaon
On-site
Position: Data Engineer
Budget: 1.8 LPM
Experience: 7 yrs
Location: Gurgaon

Requirements:
- Minimum of 7+ years of experience in the data analytics field.
- Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets.
- Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
- Ability to troubleshoot and optimize complex queries on the Spark platform (a small illustration follows this posting).
- Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
- Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
- Hands-on experience in performance tuning and optimizing code running in Databricks environments.
- Strong analytical and problem-solving skills, particularly within Big Data environments.
- Experience with Big Data management tools and technologies including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, Azure.

Technical and Professional Skills (Must Have):
- Excellent communication skills with the ability to interact directly with customers.
- Azure/AWS Databricks.
- Python / Scala / Spark / PySpark.
- Strong SQL and RDBMS expertise.
- Hive / HBase / Impala / Parquet.
- Sqoop, Kafka, Flume.
- Airflow.

Job Type: Full-time
Pay: ₹100,000.00 - ₹1,300,000.00 per year
Schedule: Day shift
Work Location: In person
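As a small, generic illustration of the Spark query tuning this role calls for (not the client's code), the sketch below broadcasts a small dimension table to avoid a shuffle join; the table and column names are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.table("sales_fact")   # large fact table (placeholder name)
dims = spark.read.table("store_dim")     # small dimension table (placeholder name)

# Broadcasting the small table ships it to every executor and avoids
# shuffling the large table across the cluster.
joined = facts.join(broadcast(dims), "store_id")
joined.write.mode("overwrite").saveAsTable("sales_enriched")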
Posted 1 day ago
3.0 years
3 - 10 Lacs
Chennai
On-site
DESCRIPTION
Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join the Retail Business Services team and help unlock insights which take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employee global work authorization experiences. They will do this by partnering with key stakeholders to be curious and comfortable digging deep into the business challenges to understand and identify insights that will enable us to figure out standards to improve our ability to globally scale this program. They will be comfortable delivering/presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process.

This role requires an individual with excellent analytical abilities as well as outstanding business acumen. The candidate knows and values our customers (internal and external) and will work back from the customer to create structured processes for global expansions of work authorization, and help integrate new countries/new acquisitions into the existing program. They are experts in partnering and earning trust with operations/business leaders to drive these key business decisions.

Responsibilities:
- Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards.
- Partner with operations/business teams to consult, develop and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs.
- Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format.
- Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks.
- Participate in strategic and tactical planning discussions.
- Design, develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support our business needs.
- Excellent writing skills, to create artifacts easily digestible by business and tech partners.

Key job responsibilities
- Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight
- Understand the requirements of stakeholders and map them to the data sources/data warehouse
- Own the delivery and backup of periodic metrics and dashboards to the leadership team
- Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies
- Execute high priority (i.e. cross-functional, high impact) projects to improve operations performance with the help of Operations Analytics managers
- Perform business analysis and data queries using appropriate tools
- Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area

BASIC QUALIFICATIONS
- 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience
- Experience defining requirements and using data and metrics to draw business insights
- Experience with SQL or ETL
- Knowledge of data visualization tools such as QuickSight, Tableau, Power BI or other BI packages
- 1+ years of tax, finance or a related analytical field experience

PREFERRED QUALIFICATIONS
- Experience with Amazon Redshift and other AWS technologies
- Experience creating complex SQL queries joining multiple datasets, ETL DW concepts
- Experience in Scala and PySpark

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 day ago
2.0 years
4 - 8 Lacs
Chennai
On-site
DESCRIPTION
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

Overview of the role
The Business Research Analyst will be responsible for the data and machine learning parts of continuous improvement projects across the Discoverability space. This will require collaboration with local and global teams. The Research Analyst should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. The Research Analyst will perform big data analysis to identify patterns, and train models to generate product-to-product relationships and product-to-brand & model relationships. The Research Analyst is also expected to continuously improve the ML/LLM solutions in terms of precision & recall, efficiency and scalability. The Research Analyst should be able to write clear and detailed functional specifications based on business requirements.

Key job responsibilities
As a Research Analyst, you'll collaborate with experts to develop advanced machine learning or large language model (ML/LLM) solutions for business needs. You'll drive product pilots, demonstrating innovative thinking and customer focus. You'll build scalable solutions, write high-quality code, and develop state-of-the-art ML/LLM models. You'll coordinate between science and software teams, optimizing solutions. The role requires thriving in ambiguous, fast-paced environments and working independently with ML/LLM models.
- Collaborate and propose best in class ML/LLM solutions for business requirements
- Dive deep to drive product pilots, demonstrate innovation and customer obsession to steer the product roadmap
- Develop scalable solutions by writing high-quality code, building ML/LLM models using current research breakthroughs and implementing performance optimization techniques
- Coordinate design efforts between Science and Software teams to deliver optimized solutions
- Communicate technical concepts to stakeholders at all levels
- Ability to thrive in ambiguous, uncertain and fast-moving ML/LLM use case developments
- Familiar with ML/LLM models and able to work independently

BASIC QUALIFICATIONS
- Bachelor's degree in math/statistics/engineering or other equivalent quantitative discipline
- 2+ years of relevant work experience in solving real world business problems using machine learning, deep learning, data mining and statistical algorithms
- Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R, Java desired but not mandatory
- Strong analytical thinking
- Ability to creatively solve business problems, innovating new approaches where required and articulating ideas to a wide range of audiences using strong data, written and verbal communication skills
- Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management and operations.
PREFERRED QUALIFICATIONS
- Master's degree with specialization in ML, NLP or Computer Vision preferred
- 3+ years of relevant work experience in a related field/s (project management, customer advocate, product owner, engineering, business analysis) - diverse experience will be favored, e.g. a mix of experience across different roles
- In-depth understanding of machine learning concepts, including developing models and tuning the hyper-parameters, as well as deploying models and building ML services
- Technical expertise and experience in data science, ML and statistics

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 day ago
6.0 years
0 Lacs
India
Remote
Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is Looking for:

About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.

The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics
- Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products
- You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (a minimal sketch of this flow follows the posting).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.

Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required, MSCS or equivalent strongly preferred

How to apply for this opportunity?
Step 1: Click on Apply! and register or login on our portal.
Step 2: Complete the Screening Form & upload an updated resume.
Step 3: Increase your chances to get shortlisted & meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
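For orientation only, the Retrieval-Augmented Generation flow named in the responsibilities can be sketched vendor-neutrally as below. embed(), vector_index.search(), and llm.generate() are stand-ins for whatever embedding model, vector database (Pinecone, PGVector, etc.) and LLM client are actually used; they are placeholders, not real library APIs.

def answer(question, embed, vector_index, llm, top_k=5):
    """Sketch of a RAG query path; every callable here is a placeholder."""
    query_vec = embed(question)                          # 1. embed the user question
    hits = vector_index.search(query_vec, top_k=top_k)   # 2. retrieve nearest documents
    context = "\n\n".join(hit.text for hit in hits)      # 3. assemble grounding context
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.generate(prompt)                          # 4. generate a grounded answer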
Posted 1 day ago
7.0 years
15 - 18 Lacs
Mumbai Metropolitan Region
On-site
Position: Business Intelligence Developer 27165
Location: India (Multiple Offices)

Overview
A leading global consulting and advisory firm is seeking a Business Intelligence Developer to join its expanding Technology Organization. This role will be part of the Information Solutions team and will report directly to the Head of Information Solutions. The successful candidate will play a pivotal role in building and operating modern data platforms, pipelines, and analytics solutions aligned with the enterprise's data strategy. This position requires strong cross-functional collaboration, technical expertise, and a problem-solving mindset to translate business requirements into actionable intelligence.

Key Responsibilities
- Design and build ETL processes to ingest and transform data from multiple source systems into integrated business intelligence environments.
- Develop reports and dashboards using tools such as Power BI, SSRS, and related BI technologies.
- Ensure data quality through automated processes and validation routines.
- Contribute to the creation and maintenance of data dictionaries and catalogs.
- Support the development of data marts and data lakes to empower strategic business initiatives.
- Translate business problems into analytics solutions and interpret findings into actionable business insights.
- Conduct requirement-gathering sessions and propose innovative, data-driven solutions.
- Lead or participate in the design, development, and maintenance of complex BI dashboards and integrated applications.
- Manage development resources when required to deliver BI products and services.
- Conduct in-depth analysis and support the interpretation and adoption of BI tools across stakeholders.
- Proactively identify opportunities for process optimization, risk mitigation, and revenue growth through data insights.
- Provide technical support for BI platforms and assist with troubleshooting and performance tuning.
- Lead or support design sessions for end-to-end data integration solutions.
- Support the delivery of scalable, reusable, and sustainable BI architecture for the firm.

Required Qualifications
- 5–7+ years of experience in business intelligence using Microsoft technologies, including SQL Server, SSIS, Power BI, SSRS, SSAS, or cloud-based equivalents (e.g., Azure).
- Hands-on experience with large-scale ETL pipelines and data integration processes.
- In-depth experience working with data warehouses, dimensional modeling, and analytics architecture.
- Proficiency in developing paginated reports and dashboards using Power BI or comparable tools (Tableau, Qlik, etc.).
- Familiarity with Power BI Cloud Services and Power BI Report Server.
- Strong command of Excel for advanced data manipulation and reporting.
- Skilled in automation, performance tuning, and monitoring of data pipelines.
- Strong communication and documentation skills.
- Ability to operate independently and manage competing priorities in a dynamic environment.

Preferred Qualifications
- Experience with advanced analytics using R, Python, Scala, or similar tools.
- Experience with cloud data platforms such as Azure, AWS, or Snowflake.
- Familiarity with DevOps practices and tools, including CI/CD pipelines.
- Experience working in or with data lake environments and reference data architectures.
- Experience setting up and maintaining Power BI Report Server is advantageous.
Skills: data warehousing, report development, Excel, Power BI, intelligence, dimensional modeling, ETL processes, automation, data integration, Azure, communication, SSRS, SQL Server, SSIS, business intelligence, performance tuning, SSAS, data, analytics
Posted 1 day ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description

Team Summary
Visa Consulting and Analytics (VCA) drives tangible, impactful and financial results for Visa's network clients, including both financial services and merchants. Drawing on our expertise in strategy consulting, data analytics, brand management, marketing, operational and macroeconomics, Visa Consulting and Analytics solves the most strategic problems for our clients.

The India & South Asia (INSA) Consulting Market team within Visa Consulting & Analytics provides consulting and solution services for Visa's largest issuers in India, Sri Lanka, Bangladesh, Nepal, Bhutan & Maldives. We apply deep expertise in the payments industry to provide solutions to assist clients with their key business priorities, drive growth and improve profitability. The VCA team provides a comprehensive range of consulting services to deliver solutions that address unique challenges in areas such as improving profitability, strategic growth, customer experience, digital payments and managing risk.

The individual will be part of the VCA Data Science geographic team cluster for the India and South Asia (INSA) markets and will be responsible for sales and delivery of data science and analytics based solutions to Visa clients.

What the Director, Data Science, Visa Consulting & Analytics does at Visa:
The Director, Data Science at Visa Consulting & Analytics (VCA) blends technical expertise with business acumen to deliver impactful, data-driven solutions to Visa's clients, shaping the future of payments through analytics and innovation. This role combines hands-on modeling with strategic leadership, leading the adoption of Generative AI (Gen AI) and Agentic AI into Visa's offerings. This is an onsite role based out of Mumbai. The role will require travel.

Key Responsibilities

Commercial Acumen / Business Development
- Collaborate with internal and external clients to comprehend their strategic business inquiries, leading project scoping and design to effectively address those questions by leveraging Visa's data.
- Drive revenue outcomes for VCA, particularly focusing on data science offerings such as ML model solutions, data collaboration, and managed service verticals within data science.

Technical Leadership
- Design, develop, and implement advanced analytics and machine learning models to solve complex business challenges for Visa's clients, leveraging VisaNet data as well as client data (a generic illustration follows this posting).
- Drive the integration and adoption of Gen AI and Agentic AI technologies within Visa's data science offerings.
- Ensure the quality, performance, and scalability of data-driven solutions.

Strategic Business Impact
- Translate client needs and business challenges into actionable data science projects that deliver measurable value.
- Collaborate with cross-functional teams including Consulting, Sales, Product, and Data Engineering to align analytics solutions with business objectives.
Present insights and recommendations to both technical and non-technical stakeholders.

Team Leadership & Development
Mentor and manage a team of data scientists and analysts, fostering a culture of innovation, collaboration, and continuous learning.
Set priorities, provide technical direction, and oversee the end-to-end delivery of analytics projects.

Innovation & Best Practices
Stay abreast of emerging trends in AI and data science, particularly in Gen AI and Agentic AI.
Champion the adoption of new methodologies and tools to enhance Visa’s analytics capabilities and value to clients.
Represent VCA as a thought leader in internal and external forums.

This is a hybrid position. Expectation of days in office will be confirmed by your Hiring Manager.

Qualifications

Basic Qualifications:
• Advanced degree (MS/PhD) in Computer Science, Statistics, Mathematics, Engineering, or a related field from a Tier-1 institute (e.g., IIT, ISI, DSE, IISc).
• 12+ years of experience in data science, analytics, or related fields, including 3+ years in a leadership/management role.
• Proven track record of building and leading high-performing data science teams.
• Expertise in statistical analysis, machine learning, data mining, and predictive modeling.
• Proficiency in programming languages such as Python, R, or Scala, and experience with ML frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
• Excellent communication, presentation, and stakeholder management skills.

Preferred Qualifications:
• Exposure/prior work experience in the payments and/or banking industry.
• Experience in the consulting space or a matrixed team structure.
• Familiarity with cloud platforms (AWS, Azure, GCP) and big data technologies (Spark, Hadoop).
• Publication or conference experience in the data science/AI community.

Additional Information
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
Posted 1 day ago
2.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, which includes installation, performance tuning, and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement scalable solutions while managing multiple customer groups.

What You Will Do
Support large-scale enterprise data solutions with a focus on high availability, low latency, and scalability.
Provide documentation and automation capabilities for Disaster Recovery as part of application deployment.
Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK).
Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
Knowledge of the configuration of monitoring solutions and the creation of dashboards (DPA, Datadog, BigPanda, Prometheus, Grafana, Log Analytics, ChaosSearch).

What Experience You Need
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required.
2-5 years of experience in database administration, system administration, performance tuning, and automation.
1+ years of experience developing and/or administering software in public cloud.
Experience managing traditional databases such as SQL Server, Oracle, Postgres, or MySQL and providing 24x7 support.
Experience implementing and managing Infrastructure as Code (e.g., Terraform, Python, Chef) and a source code repository (GitHub).
Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
Experience designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using cloud-native GCP, Java, Python, Scala, SQL, etc.
Proficiency with continuous integration and continuous delivery tooling and practices.
Cloud certification strongly preferred.

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
Automation - Uses knowledge of best practices in coding to build pipelines for build, test, and deployment of processes/components; understands technology trends and uses knowledge to identify factors that can be used to automate system/process deployments.
Data / Database Management - Uses knowledge of database operations and applies engineering skills to improve resilience of products/services. Designs, codes, verifies, tests, documents, and modifies programs/scripts and integrated software services; applies industry best standards and tools to achieve a well-engineered result.
Operational Excellence - Prioritizes and organizes own work; monitors and measures systems against key metrics to ensure availability of systems; identifies new ways of working to make processes run smoother and faster.
Technical Communication/Presentation - Explains technical information and the impacts to stakeholders and articulates the case for action; demonstrates strong written and verbal communication skills.
Troubleshooting - Applies a methodical approach to routine issue definition and resolution; monitors actions to investigate and resolve problems in systems, processes and services; determines problem fixes/remedies; assists with the implementation of agreed remedies and preventative measures; analyzes patterns and trends.
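A minimal sketch of the kind of routine database health check and automation this role describes, using PostgreSQL system views via psycopg2. The connection string, metric names, and thresholds are illustrative placeholders, not details from the posting.

```python
# Minimal sketch: automated health check against a PostgreSQL instance.
# The DSN and the specific checks are hypothetical examples.
import psycopg2

DSN = "host=db.example.internal dbname=appdb user=monitor password=***"  # hypothetical

CHECKS = {
    "active_connections": "SELECT count(*) FROM pg_stat_activity",
    "database_size_bytes": "SELECT pg_database_size(current_database())",
    "long_running_queries": """
        SELECT count(*) FROM pg_stat_activity
        WHERE state = 'active' AND now() - query_start > interval '5 minutes'
    """,
}

def run_health_check(dsn: str) -> dict:
    """Run each check and return a metric-name -> value mapping."""
    results = {}
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            for name, sql in CHECKS.items():
                cur.execute(sql)
                results[name] = cur.fetchone()[0]
    return results

if __name__ == "__main__":
    for metric, value in run_health_check(DSN).items():
        print(f"{metric}: {value}")
```

A script like this could feed the dashboards and alerting tools named above rather than printing to stdout.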
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Role: Data Engineer
Experience: 7+ Years
Mode: Hybrid

Key Responsibilities:
• Design and implement enterprise-grade Data Lake solutions using AWS (e.g., S3, Glue, Lake Formation).
• Define data architecture patterns, best practices, and frameworks for handling large-scale data ingestion, storage, compute, and processing.
• Optimize cloud infrastructure for performance, scalability, and cost-effectiveness.
• Develop and maintain ETL pipelines using tools such as AWS Glue or similar platforms; manage CI/CD pipelines in DevOps.
• Create and manage robust Data Warehousing solutions using technologies such as Redshift.
• Ensure high data quality and integrity across all pipelines.
• Design and deploy dashboards and visualizations using tools like Tableau, Power BI, or Qlik.
• Collaborate with business stakeholders to define key metrics and deliver actionable insights.
• Implement best practices for data encryption, secure data transfer, and role-based access control.
• Lead audits and compliance certifications to maintain organizational standards.
• Work closely with cross-functional teams, including Data Scientists, Analysts, and DevOps engineers.
• Mentor junior team members and provide technical guidance for complex projects.
• Partner with stakeholders to define and align data strategies that meet business objectives.

Qualifications & Skills:
• Strong experience in building Data Lakes using the AWS cloud platform tech stack.
• Proficiency with AWS technologies such as S3, EC2, Glue/Lake Formation (or EMR), QuickSight, Redshift, Athena, Airflow (or Lambda + Step Functions + EventBridge), Data, and IAM.
• Expertise in AWS tools covering data lake storage, compute, security, and data governance.
• Advanced skills in ETL processes, SQL (e.g., Cloud SQL, Aurora, Postgres), NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra), and programming languages (e.g., Python, Spark, or Scala); real-time streaming applications, preferably in Spark, Kafka, or other streaming platforms.
• AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.
• Hands-on experience with Data Warehousing solutions and modern architectures like Lakehouses or Delta Lake. Proficiency in visualization tools such as Tableau, Power BI, or Qlik.
• Strong problem-solving skills and ability to debug and optimize applications for performance.
• Strong understanding of databases/SQL for database operations and data management.
• Familiarity with CI/CD pipelines and version control systems like Git.
• Strong understanding of Agile methodologies and working within Scrum teams.

Preferred Qualifications:
• Bachelor of Engineering degree in Computer Science, Information Technology, or a related field.
• AWS Certified Solutions Architect – Associate (required).
• Experience with Agile/Scrum methodologies and design sprints.
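A minimal sketch of the kind of batch ETL step this posting describes: read raw CSV from an S3 landing zone with PySpark, apply basic cleansing, and write partitioned Parquet back to the lake. Bucket names and column names are hypothetical.

```python
# Minimal sketch: S3 -> cleanse -> partitioned Parquet, assuming a Spark
# runtime (e.g., Glue or EMR). Paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-zone/orders/")            # hypothetical path
)

curated = (
    raw.dropDuplicates(["order_id"])                 # basic data-quality rule
    .filter(F.col("order_amount") > 0)               # drop invalid rows
    .withColumn("order_date", F.to_date("order_ts"))
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")                       # partition for Athena/Redshift Spectrum
    .parquet("s3://example-curated-zone/orders/")    # hypothetical path
)
```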
Posted 1 day ago
0.0 - 3.0 years
16 - 18 Lacs
Bengaluru, Karnataka
On-site
Location: Bangalore, Electronics City

As an experienced Full Stack Developer you will have opportunities to work at all levels of our technology stack, from customer-facing dashboards and back-end business logic to high-volume data collection and processing. As a Full Stack Developer you should be comfortable around a range of different technologies and languages, and with the integration of third-party libraries and development frameworks.

Work with project stakeholders to understand requirements and ideate software solutions
Design client-side and server-side architectures
Build front-end applications delivering on usability and performance
Build back-end services for scalability and reliability
Write effective APIs and integrate with third-party APIs
Adhere to security and data protection standards and requirements
Instrument and test software to ensure the highest quality
Monitor, troubleshoot, debug and upgrade production systems
Write technical documentation

REQUIREMENTS
Proven experience as a Full Stack Developer or similar role
Comfortable with Golang, Scala, Python, and Kafka, or the desire to learn these technologies
Experience in front-end web development helping to create customer-facing user interfaces; experience with ReactJS a plus
Familiarity with databases and data warehousing such as PostgreSQL, MongoDB, Snowflake
Familiarity with the Amazon Web Services cloud platform
Attention to detail, strong organizational skills, and a desire to be part of a team
Degree in Computer Science, Engineering, or relevant field

Job Types: Full-time, Permanent
Pay: ₹1,600,000.00 - ₹1,800,000.00 per year
Benefits: Health insurance, Paid sick time, Provident Fund
Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Are you comfortable working from the Electronics City, Bangalore location? Python back-end and ReactJS experience is a must.
Experience: Full-stack development: 3 years (Required)
Location: Bangalore, Karnataka (Required)
Willingness to travel: 100% (Required)
Work Location: In person
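A minimal sketch of the "write effective APIs" responsibility above: a small REST endpoint in Flask. The route, payload shape, and in-memory store are illustrative only and stand in for a real service backed by PostgreSQL or MongoDB.

```python
# Minimal sketch of a back-end REST endpoint; routes and payloads are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

ITEMS = {}  # in-memory store standing in for a real database


@app.route("/api/items", methods=["POST"])
def create_item():
    payload = request.get_json(silent=True) or {}
    if "name" not in payload:
        return jsonify({"error": "name is required"}), 400
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = {"id": item_id, "name": payload["name"]}
    return jsonify(ITEMS[item_id]), 201


@app.route("/api/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item), 200


if __name__ == "__main__":
    app.run(debug=True)
```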
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
what is CRED?
CRED is an exclusive community for India’s most trustworthy and creditworthy individuals, where members are rewarded for good financial behavior. CRED was born out of a need to bring back the focus on a long-lost virtue, one of trust, the idea being to create a community centered around this virtue. a community that constantly strives to become more virtuous in this regard till they finally scale their behavior to create a utopia where being trustworthy is the norm and not the exception. to build a community like this requires a community of its own; a community special in its own way, working towards making this vision come true.

here’s a thought experiment: what do you get when you put a group of incredibly passionate and driven people and entrust them with the complete freedom to chase down their goals in a completely uninhibited manner? answer: you get something close to what we have at CRED; CRED just has it better.

here’s what will be in store for you at CRED once you join as a part of the team:
own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology
create scalable solutions to business problems using statistical techniques, machine learning, and NLP
design, experiment and evaluate highly innovative models for predictive learning
work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creations
establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation
research and implement novel machine learning and statistical approaches
publish and/or talk about your work at external conferences

5+ years of experience in data science
in-depth understanding of modern machine learning techniques and their mathematical underpinnings
demonstrated ability to build PoCs for complex, ambiguous problems and scale them up
strong programming skills (Python, Java, or Scala preferred)
high proficiency in at least one of the following broad areas: machine learning, statistical modeling/inference, information retrieval, data mining, NLP

how is life at CRED?
working at CRED would instantly make you realize one thing: you are working with the best talent around you. not just in the role you occupy, but everywhere you go. talk to someone around you; most likely you will be talking to a singer, standup comic, artist, writer, an athlete, maybe a magician. at CRED people always have talent up their sleeves. with the right company, even conversations can be rejuvenating. at CRED, we guarantee a good company.

hard truths: pushing oneself comes with the role. and we realise pushing oneself is hard work. which is why CRED is in the continuous process of building an environment that helps the team rejuvenate: included but not limited to a stacked in-house pantry, with lunch and dinner provided for all team members, paid sick leaves and a comprehensive health insurance.

to make things smoother and to make sure you spend time and energy only on the most important things, CRED strives to make every process transparent: there are no work timings because we do not believe in archaic methods of calculating productivity, your work should speak for you. there are no job designations because you will be expected to hold down roles that cannot be described in one word.
since trust is a major virtue in the community we have built, we make it a point to highlight it in the community behind CRED: all our employees get their salaries before their joining date. a show of trust that speaks volumes because of the skin in the game. there are many more such eccentricities that make CRED what it is but that’s for one to discover. if you feel at home reading this, get in touch.
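A minimal sketch of the predictive-modeling workflow this role describes, assuming a hypothetical tabular dataset (features.csv) with a binary target; none of the file or column names come from the posting.

```python
# Minimal sketch: train and evaluate a baseline binary classifier with scikit-learn.
# Dataset, columns, and metric choice are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("features.csv")                     # hypothetical feature table
X, y = df.drop(columns=["target"]), df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = Pipeline([
    ("scale", StandardScaler()),                     # normalize numeric features
    ("clf", LogisticRegression(max_iter=1000)),      # simple, interpretable baseline
])
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```

In practice a baseline like this would be compared against richer models before any real-time deployment.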
Posted 1 day ago
7.0 years
0 Lacs
Gurgaon Rural, Haryana, India
On-site
Minimum of 7+ years of experience in the data analytics field.
Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets.
Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
Ability to troubleshoot and optimize complex queries on the Spark platform.
Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
Hands-on experience in performance tuning and optimizing code running in Databricks environments.
Strong analytical and problem-solving skills, particularly within Big Data environments.
Experience with Big Data management tools and technologies including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, Azure.

Technical and Professional Skills (Must Have):
Excellent communication skills with the ability to interact directly with customers.
Azure/AWS Databricks.
Python / Scala / Spark / PySpark.
Strong SQL and RDBMS expertise.
HIVE / HBase / Impala / Parquet.
Sqoop, Kafka, Flume.
Airflow.
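A minimal sketch of two common Spark tuning techniques relevant to the "troubleshoot and optimize complex queries" requirement: partition pruning on read and a broadcast join to avoid shuffling a small dimension table. Paths and column names are hypothetical.

```python
# Minimal sketch of Spark query tuning; table layouts are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

# Filtering on the partition column lets Spark skip irrelevant files (partition pruning).
events = (
    spark.read.parquet("/mnt/lake/events/")              # hypothetical partitioned table
    .filter(F.col("event_date") == "2024-01-01")
)

dim_users = spark.read.parquet("/mnt/lake/dim_users/")   # small lookup table

# Broadcasting the small side avoids a full shuffle of the large fact table.
joined = events.join(broadcast(dim_users), on="user_id", how="left")

joined.groupBy("country").count().show()
```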
Posted 1 day ago
5.0 - 7.0 years
25 - 28 Lacs
Pune, Maharashtra, India
On-site
Job Description
We are looking for a Big Data Engineer who will build and manage Big Data pipelines for us to deal with the huge structured data sets that we use as an input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions to use for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Core Responsibilities
Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources.
Ensure data quality, reliability, and availability across the pipeline lifecycle.
Collaborate with product managers, architects, and engineering leads to define technical strategy.
Participate in code reviews, testing, and deployment processes to maintain high standards.
Own smaller components of the data platform or pipelines and take end-to-end responsibility.
Continuously identify and resolve performance bottlenecks in data pipelines.
Take initiative, show the drive to pick up new things proactively, and work as a senior individual contributor on the multiple products and features we have.

Required Qualifications
5 to 7 years of experience in Big Data or data engineering roles.
JVM-based languages like Java or Scala are preferred; for candidates with solid Big Data experience, Python is also acceptable.
Proven and demonstrated experience working with distributed Big Data tools and processing frameworks like Apache Spark or equivalent (for processing), Kafka or Flink (for streaming), and Airflow or equivalent (for orchestration).
Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR.
Ability to write clean, efficient, and maintainable code.
Good understanding of data structures, algorithms, and object-oriented programming.

Tooling & Ecosystem
Use of version control (e.g., Git) and CI/CD tools.
Experience with data orchestration tools (Airflow, Dagster, etc.).
Understanding of file formats like Parquet, Avro, ORC, and JSON.
Basic exposure to containerization (Docker) or infrastructure-as-code (Terraform is a plus).

Skills: airflow, pipelines, data engineering, scala, ci, python, flink, aws, data orchestration, java, kafka, gcp, parquet, orc, azure, cd, dagster, ci/cd, git, avro, terraform, json, docker, apache spark, big data
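A minimal sketch of the streaming side of the pipelines described above: consume JSON events from Kafka with Spark Structured Streaming, parse them against a schema, and land them as Parquet with checkpointing. The broker, topic, schema, and paths are hypothetical.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet sink.
# Broker address, topic, schema, and sink paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")    # hypothetical broker
    .option("subscribe", "events")                       # hypothetical topic
    .load()
)

parsed = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", schema).alias("e"))
    .select("e.*")
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/curated/events/")             # hypothetical sink
    .option("checkpointLocation", "/data/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```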
Posted 1 day ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

Job Summary:
Build systems for collection & transformation of complex data sets for use in production systems
Collaborate with engineers on building & maintaining back-end services
Implement data schema and data management improvements for scale and performance
Provide insights into key performance indicators for the product and customer usage
Serve as the team's authority on data infrastructure, privacy controls and data security
Collaborate with appropriate stakeholders to understand user requirements
Support efforts for continuous improvement, metrics and test automation
Maintain operations of live services as issues arise on a rotational, on-call basis
Verify whether data architecture meets security and compliance requirements and expectations
Should be able to learn fast and adapt quickly at a rapid pace
Core technologies: Java/Scala, SQL

Minimum Qualifications:
Bachelor's degree in computer science, computer engineering or a related field, or equivalent experience
3+ years of progressive experience demonstrating strong architecture, programming and engineering skills
Firm grasp of data structures and algorithms with fluency in programming languages like Java, Python, Scala
Strong SQL skills; should be able to write complex queries
Strong experience with orchestration tools like Airflow
Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations
Experience with streaming technologies such as Apache Spark, Kafka, Flink
Backend experience including Apache Cassandra, MongoDB and relational databases such as Oracle, PostgreSQL
Solid hands-on AWS/GCP experience (4+ years)
Strong communication and soft skills
Knowledge and/or experience with containerized environments, Kubernetes, Docker
Experience implementing and maintaining highly scalable microservices in REST, Spring Boot, gRPC
Appetite for trying new things and building rapid POCs

Key Responsibilities:
Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage
Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake
Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications
Ensure data quality and integrity by implementing robust data validation and cleansing processes
Optimize data pipelines for performance, scalability, and reliability
Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime
Implement best practices for data management, security, and compliance
Document data engineering processes, workflows, and technical specifications
Stay up-to-date with industry trends and emerging technologies in data engineering and big data.
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story!
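A minimal sketch of the orchestration piece mentioned above, as an Airflow DAG with two dependent tasks. The task bodies, DAG id, and schedule are illustrative placeholders, not Paytm's actual pipelines.

```python
# Minimal sketch: a two-step daily ingestion DAG in Apache Airflow 2.x.
# Task logic is a placeholder for real extract/load code.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling data from source systems")        # placeholder for real ingestion


def load():
    print("loading curated data into the warehouse")  # placeholder for real load


with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # Airflow 2.x; newer releases use `schedule`
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```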
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Lead Platform Engineer – AWS Data Platform Location: Hybrid – Hyderabad, Telangana Experience: 10+ years Employment Type: Full-Time Apply Now --- About the Role Infoslab is hiring on behalf of our client, a leading healthcare technology company committed to transforming healthcare through data. We are seeking a Lead Platform Engineer to architect, implement, and lead the development of a secure, scalable, and cloud-native data platform on AWS. This role combines deep technical expertise with leadership responsibilities. You will build the foundation that supports critical business intelligence, analytics, and machine learning applications across the organization. --- Key Responsibilities Architect and build a highly available, cloud-native data platform using AWS services such as S3, Glue, Redshift, Lambda, and ECS. Design reusable platform components and frameworks to support data engineering, analytics, and ML pipelines. Build and maintain CI/CD pipelines, GitOps workflows, and infrastructure-as-code using Terraform. Drive observability, operational monitoring, and incident response processes across environments. Ensure platform security, compliance (HIPAA, SOC2), and audit-readiness in partnership with InfoSec. Lead and mentor a team of platform engineers, promoting best practices in DevOps and cloud infrastructure. Collaborate with cross-functional teams to deliver reliable and scalable data platform capabilities. --- Required Skills and Experience 10+ years of experience in platform engineering, DevOps, or infrastructure roles with a data focus. 3+ years in technical leadership or platform engineering management. Deep experience with AWS services, including S3, Glue, Redshift, Lambda, ECS, and Athena. Strong hands-on experience with Python or Scala, and automation tooling. Proficient in Terraform and CI/CD tools (GitHub Actions, Jenkins, etc.). Advanced knowledge of Apache Spark for both batch and streaming workloads. Proven track record of building secure, scalable, and compliant infrastructure. Strong understanding of observability, reliability engineering, and infrastructure automation. --- Preferred Qualifications Experience with containerization and orchestration (Docker, Kubernetes). Familiarity with Data Mesh principles or domain-driven data platform design. Background in healthcare or other regulated industries. Experience integrating data platforms with BI tools like Tableau or Looker. --- Why Join Contribute to a mission-driven client transforming healthcare through intelligent data platforms. Lead high-impact platform initiatives that support diagnostics, research, and machine learning. Work with modern engineering practices including IaC, GitOps, and serverless architectures. Be part of a collaborative, hybrid work culture focused on innovation and technical excellence.
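A minimal sketch of the compliance and audit-readiness theme in this role: a boto3 helper that flags S3 buckets missing a default server-side encryption configuration. It assumes standard AWS credentials are already configured; the function name is illustrative.

```python
# Minimal sketch: flag S3 buckets without default encryption (audit helper).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def unencrypted_buckets() -> list:
    """Return names of buckets that report no default encryption configuration."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
            else:
                raise
    return flagged


if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"bucket without default encryption: {name}")
```

In a real platform this kind of check would typically be codified in Terraform policies or a scheduled compliance job rather than run ad hoc.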
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
Responsible for the creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). Take part in the evaluation of new data tools and POCs and provide suggestions. Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP and CRM systems.
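A minimal sketch of the data-quality and integrity checks this posting emphasizes (nulls, duplicate keys, out-of-range values) as a small validation step in a pipeline. The extract file, key column, and rules are hypothetical.

```python
# Minimal sketch: lightweight data-quality validation on a batch extract.
# File name, key column, and rules are placeholders.
import pandas as pd


def validate(df: pd.DataFrame, key: str = "record_id") -> list:
    """Return a list of human-readable data-quality violations."""
    problems = []

    null_counts = df.isna().sum()
    for column, count in null_counts[null_counts > 0].items():
        problems.append(f"{column}: {count} null values")

    dupes = df.duplicated(subset=[key]).sum()
    if dupes:
        problems.append(f"{key}: {dupes} duplicate keys")

    if "amount" in df.columns and (df["amount"] < 0).any():
        problems.append("amount: negative values found")

    return problems


if __name__ == "__main__":
    batch = pd.read_csv("extract_from_erp.csv")   # hypothetical ERP extract
    for issue in validate(batch):
        print("DQ violation:", issue)
```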
Posted 1 day ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Supports, develops and maintains a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycle, for data driven application. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. 
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 Years of experience. Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers. Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. Assist to resolve issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. 
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
API: Working knowledge of APIs to consume data from ERP and CRM systems.
Posted 1 day ago
8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering
Experience: Sr. Manager
Primary Address: Bangalore, Karnataka

Overview
Voyager (94001), India, Bangalore, Karnataka
Senior Manager - Machine Learning Engineering

Our mission at Capital One is to create trustworthy, reliable and human-in-the-loop AI systems, changing banking for good. For years, Capital One has been leading the industry in using machine learning to create real-time, intelligent, automated customer experiences. From informing customers about unusual charges to answering their questions in real time, our applications of AI & ML are bringing humanity and simplicity to banking. Because of our investments in public cloud infrastructure and machine learning platforms, we are now uniquely positioned to harness the power of AI. We are committed to building world-class applied science and engineering teams and continuing our industry-leading capabilities with breakthrough product experiences and scalable, high-performance AI infrastructure. At Capital One, you will help bring the transformative power of emerging AI capabilities to reimagine how we serve our customers and businesses who have come to love the products and services we build.

We are looking for an experienced Senior Manager, Machine Learning Engineering on the MLX Platform to help us build the Model Governance and Observability systems. In this role you will build robust SDKs and platform components to collect metadata, traces and parameters of models running at scale, and work on cutting-edge Gen AI frameworks and their instrumentation. You will also lead teams to analyze and optimize model performance, latency, and resource utilization to maintain high standards of efficiency, reliability and compliance. You will build and lead a highly talented software engineering team to unlock innovation, speed to market and real-time processing. This leader must be a deep technical expert and thought leader who helps accelerate adoption of engineering practices and stays current with industry innovations, trends and practices in Software Engineering and Machine Learning. Success in the role requires an innovative mind and a proven track record of delivering highly available, scalable and resilient governance and observability platforms.
What You’ll Do
Lead, manage and grow multiple teams of product-focused software engineers and managers to build and scale Machine Learning Model Governance and AI Observability platforms & SDKs
Mentor and guide the professional and technical development of engineers on your team
Work with product leaders to define the strategy, roadmap and destination architecture
Bring a passion to stay on top of tech trends, experiment with and learn new technologies, participate in internal & external technology communities, and mentor other members of the engineering community
Encourage innovation, implementation of state-of-the-art (SOTA) research technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity
Work on cutting-edge Gen AI frameworks/LLMs and provide observability using OpenTelemetry
Lead the team in Search (semantic and keyword-based) and the required pipelines to extract data, ingest it, convert it into embeddings and expose the APIs
Analyze and optimize model performance, latency, and resource utilization to maintain high standards of efficiency, reliability and compliance
Collaborate as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation big data and machine learning applications

Basic Qualifications:
Bachelor's degree in Computer Science, Computer Engineering or a technical field
At least 15 years of experience programming with Python, Go, Scala, or C/C++
At least 5 years of experience designing, building and deploying enterprise AI or ML applications or platforms
At least 3 years of experience implementing full-lifecycle ML automation using MLOps (scalable development to deployment of complex data science workflows)
At least 4 years of experience leading teams developing and scaling Machine Learning solutions
At least 8 years of people management experience and experience in managing managers

Preferred Qualifications:
Master’s degree or PhD in Engineering, Computer Science, a related technical field, or equivalent practical experience with a focus on modern AI techniques
Strong problem-solving and analytical skills with the ability to work independently with ownership, and as part of a team with a strong sense of responsibility
Experience designing large-scale distributed platforms and/or systems in cloud environments such as AWS, Azure, or GCP
Experience architecting cloud systems for security, availability, performance, scalability, and cost
Experience with delivering very large models through the MLOps life cycle, from exploration to serving
Ability to move fast in an environment with ambiguity at times, and with competing priorities and deadlines
Experience at tech and product-driven companies/startups preferred
Ability to iterate rapidly with researchers and engineers to improve a product experience while building the core platform components for Observability and Model Governance
Experience with one or multiple areas of the Gen AI technology stack, including prompt engineering, guardrails, vector databases/knowledge bases, LLM hosting, retrieval, pre-training and fine-tuning, and an understanding of observability of the Gen AI stack (Agentic AI, open-source Gen AI observability frameworks, and OpenTelemetry)

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
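A minimal sketch of the kind of model-inference instrumentation with OpenTelemetry that this posting's observability platform would collect. The model call, attribute names, and exporter are placeholders, not Capital One's internal API.

```python
# Minimal sketch: trace a model-inference call with OpenTelemetry and record
# latency/score as span attributes. Model and attribute names are hypothetical.
import time

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))  # console exporter for the sketch
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("model.observability.sketch")


def fake_model_predict(features: dict) -> float:
    time.sleep(0.05)                 # stand-in for a real model call
    return 0.87


def scored_request(features: dict) -> float:
    with tracer.start_as_current_span("model.inference") as span:
        span.set_attribute("model.name", "fraud_scorer_v2")   # hypothetical
        span.set_attribute("model.version", "2.3.1")          # hypothetical
        start = time.time()
        score = fake_model_predict(features)
        span.set_attribute("model.latency_ms", (time.time() - start) * 1000)
        span.set_attribute("model.score", score)
        return score


if __name__ == "__main__":
    scored_request({"txn_amount": 120.0})
```

In production, the console exporter would be swapped for an OTLP exporter feeding a collector, so governance and observability dashboards can consume the spans.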
Posted 1 day ago