Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
1.0 - 2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a motivated Data Engineer with 1-2 years of experience to join our team. The ideal candidate will have hands-on experience in Databricks, AWS, Scala, and Spark and will be responsible for building, optimizing, and maintaining scalable data pipelines. You'll also collaborate with global teams, work with advanced cloud technologies, and contribute to a forward-thinking data ecosystem that powers the future of automotive engineering.
Company profile: ConnectedX - Empowering Tech Leaders with Tailored Solutions
Key Responsibilities:
· Design, develop, and maintain robust ETL pipelines for data processing.
· Work with Databricks to perform data transformations and analytics.
· Develop scalable solutions using Scala and Apache Spark for big data processing.
· Collaborate with teams to ensure efficient data flow and integration.
· Optimize data storage and processing on AWS cloud infrastructure.
· Monitor and troubleshoot performance issues in data pipelines.
Required Skills & Qualifications:
· Excellent communication and ability to work collaboratively in a fast-paced environment.
· 1-2 years of experience in Data Engineering.
· Proficiency in Databricks, AWS, Scala, and Spark.
· Strong understanding of data modeling, ETL processes, and performance tuning.
· Familiarity with SQL and database management systems.
Who Can Apply: Only candidates who can join immediately or within 2 weeks can apply. Ideal for those seeking technical growth and work on a global project with cutting-edge technologies. Best suited for professionals passionate about innovation and problem-solving.
Posted 1 week ago
3.0 years
1 - 3 Lacs
Calcutta
Remote
3+ years using ETL data movement tools. Experience implementing data pipelines using Lambda, Fargate, etc. Extensive experience with AWS data services: S3, RDS, EMR, ECS, DocumentDB, Step Functions, Athena, etc. Data storage platforms: MongoDB and Postgres. Extensive experience writing and tuning SQL queries. Programming in Python is required; one or more additional languages (Java, Scala, or GoLang) will be an added advantage. Experience with big data technologies like HDFS, Kafka, Spark, Flink, etc. is a plus.
Job description:
- Hands-on experience implementing real-time and batch use cases on AWS Lambda, Fargate, etc. in Python, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model (see the sketch below this posting).
- Strong in development, problem-solving skills, and algorithms, with specialization in cloud data development focusing on scalability, quality, and performance.
- Responsible for data ingestion, solution design, use case development, and post-production use case support and enhancements.
- Expert in Python programming.
- Sound knowledge of AWS platform technologies and architecture.
- Working experience in an Agile development environment.
Discover a career with a greater purpose at CBNITS. Build resilience and nimbleness through automation. Clearly define and evangelise your mission/vision to the organisation. Recognize and pay off technical debt. See your people, measure your data.
BE A PART OF THE SMARTEST TEAM - This is your chance to work in a team full of smart people with excellent tech knowledge.
GET RECOGNIZED FOR YOUR CONTRIBUTION - Even your smallest contribution will get recognised. We express real care that goes beyond the standard pay check and benefits package.
FLEXIBLE WORKING HOURS - Work from home and work flexible hours; we allow you to tailor your work to suit your life outside the office.
CAREER DEVELOPMENT AND OPPORTUNITIES - From arranging virtual workshops to e-learning, we make it easy for employees to improve their core skills.
WHO WE ARE - CBNITS LLC, an MNC headquartered in Fremont, USA, is the place where you are inspired to explore your passions and where your talent is nurtured and cultivated. We have one development centre in India (Kolkata) that has been providing full IT solutions to our clients for the last 7 years. We mostly deal with projects like Big Data Hadoop, Dynamics 365, IoT, SAP, Machine Learning, Deep Learning, Blockchain, Flutter, React JS & React Native, DevOps & Cloud AWS, Golang, etc.
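For reference, a minimal sketch of the kind of Lambda-based ETL step this posting describes: a handler that reacts to an S3 upload, cleans JSON-lines records, and writes a curated copy. The bucket name, key layout, and the `id` field are hypothetical placeholders, not details from the posting.

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical destination bucket; not specified in the posting.
CURATED_BUCKET = "example-curated-data"

def handler(event, context):
    """Triggered by an S3 PUT event; applies a simple ETL transform."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = [json.loads(line) for line in body.splitlines() if line.strip()]
        # Transform: keep only well-formed rows and tag each with its source file.
        cleaned = [{**row, "source": key} for row in rows if "id" in row]
        s3.put_object(
            Bucket=CURATED_BUCKET,
            Key=f"curated/{key}",
            Body="\n".join(json.dumps(r) for r in cleaned).encode(),
        )
    return {"processed": len(event["Records"])}
```

A real pipeline would add retries, dead-letter handling, and schema validation; this only illustrates the event-driven ETL shape.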
Posted 1 week ago
0 years
8 - 10 Lacs
Udaipur
On-site
About the job
Role Description: This is a full-time on-site role for a Tech Lead (AI and Data) located in Bhopal. The Tech Lead will be responsible for managing and overseeing the technical execution of AI and data projects. Daily tasks involve troubleshooting, providing technical support, supervising IT-related activities, and ensuring the team is trained and well-supported. The Tech Lead will also collaborate with Kadel Labs to ensure successful product development and implementation.
Tech Skills - six key technical skills an AI Tech Lead should possess:
Machine Learning & Deep Learning – Strong grasp of algorithms (supervised, unsupervised, reinforcement); experience building and tuning neural networks (CNNs, RNNs, transformers).
Data Engineering & Pipeline Architecture – Designing ETL/ELT workflows, data lakes, and feature stores; proficiency with tools like Apache Spark, Kafka, Airflow, or Databricks.
Model Deployment & MLOps – Containerization (Docker) and orchestration (Kubernetes) for scalable inference; CI/CD for ML (e.g. MLflow, TFX, Kubeflow) and automated monitoring of model drift (see the MLflow sketch below this posting).
Cloud Platforms & Services – Hands-on with AWS (SageMaker, Lambda), Azure (ML Studio, Functions), or GCP (AI Platform); Infrastructure-as-Code (Terraform, ARM templates) for reproducible environments.
Software Engineering Best Practices – Strong coding skills in Python (TensorFlow, PyTorch, scikit-learn) and familiarity with Java/Scala or Go; API design (REST/GraphQL), version control (Git), unit testing, and code reviews.
Data Security & Privacy in AI – Knowledge of PII handling, differential privacy, and secure data storage/encryption; understanding of compliance standards (GDPR, HIPAA) and bias mitigation techniques.
Other Qualifications: Troubleshooting and technical support skills. Experience in Information Technology and customer service. Ability to provide training and guidance to team members. Strong leadership and project management skills. Excellent communication and collaboration abilities. Experience in AI and data technologies is a plus. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Job Types: Full-time, Permanent
Pay: ₹875,652.61 - ₹1,016,396.45 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday
Work Location: In person
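As a rough illustration of the CI/CD-for-ML point above, a minimal MLflow tracking sketch: logging parameters, a metric, and a model artifact for a toy classifier. The run name, parameters, and synthetic dataset are invented for the example.

```python
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data as a stand-in for a real training set.
X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):  # hypothetical run name
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)
    model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_metric("accuracy", acc)
    # Persist the model as a versioned artifact for later deployment.
    mlflow.sklearn.log_model(model, "model")
```

Drift monitoring and automated retraining would sit on top of runs like this one.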
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Required Skills and Qualifications: 3–4 years of hands-on experience in Google Cloud Platform (GCP). 1–2 years of working experience with SAP BODS, particularly in data flow development, testing, and deployment. Strong understanding of SQL/NoSQL database systems. Hands-on experience with big data technologies (Hadoop, Spark, Kafka). Strong scripting and programming skills in Python, Java, Scala, or similar. Working knowledge of Linux systems. Hands-on experience with Databricks, including Unity Catalog and performance optimization. Familiarity with DBT, Airflow, and other transformation/orchestration tools (see the orchestration sketch below this posting). Solid understanding of data pipeline architecture, workflow orchestration, and data engineering best practices. Exposure to containerization tools such as Docker/Kubernetes (preferred). Experience working with RESTful APIs or Data as a Service models (preferred). Excellent communication and collaboration skills.
Nice to Have: Experience with Azure or AWS cloud services in addition to GCP. Background in working within agile teams or DevOps environments. Knowledge of data governance and security best practices in cloud data platforms.
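For context on the orchestration tools named above, a minimal Airflow DAG sketch with a two-task extract-load flow. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    # Placeholder: pull rows from a source system.
    return [{"id": 1, "amount": 42.0}]

def load(ti, **_):
    # Pull the extract task's return value from XCom, then write it out.
    rows = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} rows")  # placeholder for a warehouse write

with DAG(
    dag_id="example_daily_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```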
Posted 1 week ago
5.0 - 10.0 years
0 - 3 Lacs
Hyderabad
Work from Office
Job Description
Designation: Lead Software Engineer - Java
Place of work: Hyderabad
About this Job: We are looking for a passionate Java developer who is familiar with customizing and integrating third-party applications and the software implementation lifecycle, is attentive to detail, and has good programming and problem-solving skills.
Responsibilities:
• Relevant experience implementing, customizing, and/or integrating third-party applications within business enterprise software.
• Understand the software implementation lifecycle (e.g., analyze, design, build, test, implement, support).
• Excellent interpersonal, communication, and analytical skills and a demonstrable bias toward action.
• Programming experience in Java, Web Services (RESTful, SOAP), SQL.
• Experience developing business applications, system integration, and/or IT development.
• Debugging and resolving technical problems that arise.
• Producing detailed design documentation.
• Recommending changes to existing Java infrastructure.
• Developing multimedia applications.
• Developing documentation to assist users.
• Ensuring continuous professional self-development.
Desired Profile:
• Strong knowledge of OOP concepts.
• Good communication skills.
• Strong programming skills.
• Should be familiar with multi-threading and exception handling.
• Strong knowledge of collections.
• Strong knowledge of JSP, servlets, and jQuery.
• Should be familiar with frameworks like Spring MVC, Spring Boot, Hibernate.
• Should be familiar with MongoDB and Elasticsearch (ES).
• Strong knowledge of Spring Boot configurations.
Graduation: B Tech/M Tech/MCA/BCA/B.Sc. (Computers)
Posted 1 week ago
2.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Description
Invent the future with us. Recognized by Fast Company’s 2023 100 Best Workplaces for Innovators List, Ampere is a semiconductor design company for a new era, leading the future of computing with an innovative approach to CPU design focused on high-performance, energy-efficient, sustainable cloud computing. By providing a new level of predictable performance, efficiency, and sustainability, Ampere is working with leading cloud suppliers and a growing partner ecosystem to deliver cloud instances, servers, and embedded/edge products that can handle the compute demands of today and tomorrow. Join us at Ampere and work alongside a passionate and growing team — we’d love to have you apply!
About The Role
Ampere Computing’s Enterprise Data and AI Team is seeking a Data Engineer proficient in modern data tools within the Azure environment. In this highly collaborative role, you will design, develop, and maintain data pipelines and storage solutions that support our business objectives. This position offers an excellent opportunity to enhance your technical skills, work on impactful projects, and grow your career in data engineering within a supportive and innovative environment.
What You’ll Achieve
Data Pipeline Development: Design, develop, and maintain data pipelines using Azure technologies such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics (see the PySpark sketch below this posting).
Data Modeling: Collaborate with senior engineers to create and optimize data models that support business intelligence and analytics requirements.
Data Storage Solutions: Implement and manage data storage solutions using Azure Data Lake Storage (Gen 2) and Cosmos DB.
Coding and Scripting: Write efficient and maintainable code in Python, Scala, or PySpark for data transformation and processing tasks.
Collaboration: Work closely with cross-functional teams to understand data requirements and deliver robust data solutions.
Continuous Learning: Stay updated with the latest Azure services and data engineering best practices to continuously enhance technical skills.
Support and Maintenance: Provide ongoing support for existing data infrastructure, troubleshoot issues, and implement improvements as needed.
Documentation: Document data processes, architecture, and workflows to ensure clarity and maintainability.
About You
Bachelor's degree in Computer Science, Information Technology, Engineering, Data Science, or a related field. 2+ years of experience in a data-related role. Proficiency with Azure data services (e.g., Databricks, Synapse Analytics, Data Factory, Data Lake Storage Gen2). Working knowledge of SQL and at least one programming language (e.g., Python, Scala, PySpark). Strong analytical and problem-solving skills with the ability to translate complex data into actionable insights. Excellent communication skills, with the ability to explain technical concepts to diverse audiences. Experience with data warehousing concepts, ETL processes, and version control systems (e.g., Git). Familiarity with Agile methodologies.
What We’ll Offer
At Ampere we believe in taking care of our employees and providing a competitive total rewards package that includes base pay, bonus (i.e., variable pay tied to internal company goals), long-term incentive, and comprehensive benefits.
Benefits highlights include: premium medical, dental, and vision insurance; parental benefits including creche reimbursement; and a retirement plan, so that you can feel secure in your health, financial future, and child care during work. A generous paid time off policy lets you embrace a healthy work-life balance, and fully catered lunch in our office, along with a variety of healthy snacks, energizing coffee or tea, and refreshing drinks, keeps you fueled and focused throughout the day.
And there is much more than compensation and benefits. At Ampere, we foster an inclusive culture that empowers our employees to do more and grow more. We are passionate about inventing industry-leading cloud-native designs that contribute to a more sustainable future. We are excited to share more about our career opportunities with you through the interview process.
Ampere is an inclusive and equal opportunity employer and welcomes applicants from all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, religion, age, veteran and/or military status, sex, sexual orientation, gender, gender identity, gender expression, physical or mental disability, or any other basis protected by federal, state or local law.
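As a rough idea of the PySpark transformation work described above, a minimal sketch that reads raw JSON from ADLS Gen2, aggregates daily event counts, and writes partitioned Parquet. The storage account, container names, and columns are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

# Hypothetical ADLS Gen2 path; container and account are placeholders.
raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/events/")

# Aggregate: daily event counts by type, dropping malformed rows.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("events"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/daily_events/"
)
```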
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What you’ll be doing...
Design, build, and maintain robust, scalable data pipelines and ETL processes. Ensure high data quality, accuracy, and integrity across all systems. Work with structured and unstructured data from multiple sources. Optimize data workflows for performance, reliability, and cost efficiency. Collaborate with analysts and data scientists to meet data needs. Monitor, troubleshoot, and improve existing data systems and jobs. Apply best practices in data governance, security, and compliance. Use tools like Spark, Kafka, Airflow, SQL, Python, and cloud platforms. Stay updated with emerging technologies and continuously improve data infrastructure.
What we’re looking for…
You Will Need To Have:
Bachelor's degree or four or more years of work experience.
Expertise in AWS Data Stack – Strong hands-on experience with S3, Glue, EMR, Lambda, Kinesis, Redshift, Athena, and IAM security best practices.
Big Data & Distributed Computing – Deep understanding of Apache Spark (batch and streaming) for large-scale data processing and analytics (see the streaming sketch below this posting).
Real-Time & Batch Data Processing – Proven experience designing, implementing, and optimizing event-driven and streaming data pipelines using Kafka and Kinesis.
ETL/ELT & Data Modeling – Strong experience in architecting and optimizing scalable ETL/ELT pipelines for structured and unstructured data.
Programming Skills – Proficiency in Scala and Java for data processing and automation.
Database & SQL Optimization – Strong understanding of SQL and experience with relational databases (PostgreSQL, MySQL). Expertise in SQL query tuning, data warehousing, and working with Parquet, Avro, and ORC formats.
Infrastructure as Code (IaC) & DevOps – Experience with CloudFormation, CDK, and CI/CD pipelines for automated deployments in AWS.
Monitoring, Logging & Observability – Familiarity with AWS CloudWatch, Prometheus, or similar monitoring tools.
API Integration – Ability to fetch and process data from external APIs and databases.
Architecture & Scalability Mindset – Ability to design and optimize data architectures for high-volume, high-velocity, and high-variety datasets.
Performance Optimization – Experience in optimizing data pipelines for cost and performance.
Cross-Team Collaboration – Work closely with Data Scientists, Analysts, DevOps, and Business Teams to deliver end-to-end data solutions.
Even better if you have one or more of the following:
Agile & CI/CD Practices – Comfortable working in Agile/Scrum environments, driving continuous integration and continuous deployment.
#TPDRNONCDIO
Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
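To illustrate the Spark Structured Streaming plus Kafka combination this posting emphasizes (the posting itself prefers Scala/Java; Python is used here for consistency with the other sketches on this page), a minimal read-from-Kafka, write-to-Parquet pipeline. The broker address, topic, schema, and paths are placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Hypothetical event schema for the JSON payloads on the topic.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "telemetry")                  # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Continuously append parsed events to a data-lake path with checkpointing.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-lake/telemetry/")
          .option("checkpointLocation", "s3a://example-lake/_chk/telemetry/")
          .start()
)
```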
Posted 1 week ago
5.0 - 10.0 years
9 - 19 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job description: Hiring for Big Data Developer with experience range 5 to 15 years.
Mandatory Skills: Big Data, Scala, Spark, Hive, Kafka
Education: BE/B.Tech/MCA/M.Tech/MSc./MSts
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Basic Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field of study.
5+ years - Ability to work effectively across organizations, product teams, and business partners.
5+ years - Knowledge of Agile (Scrum) methodology; experience in writing user stories.
5+ years - Strong understanding of database concepts and experience with multiple database technologies, optimizing query and data processing performance.
5+ years - Full-stack data engineering competency in a public cloud (Google, MS Azure, AWS). Critical thinking skills to propose data solutions, test them, and make them a reality.
5+ years - Highly proficient in SQL and Python, Java, Scala, or Go (or similar); experience programming engineering transformations in Python or a similar language.
5+ years - Demonstrated ability to lead data engineering projects, design sessions, and deliverables to successful completion.
Cloud-native technologist. Deep understanding of data service ecosystems including data warehousing, lakes, metadata, meshes, fabrics, and AI/ML use cases. User experience advocacy through empathetic stakeholder relationships. Effective communication both internally (with team members) and externally (with stakeholders). Knowledge of data warehouse concepts and experience with data warehouse/ETL processes. Strong process discipline and thorough understanding of IT processes (ISP, Data Security).
Responsibilities:
Interact with GDIA product lines and business partners to understand data engineering opportunities, tooling, and needs. Collaborate with Data Engineering and Data Architecture to design and build templates, pipelines, and data products, including automation, transformation, and curation, using best practices. Develop custom cloud solutions and pipelines with GCP-native tools: Dataprep, Data Fusion, Dataflow, dbt, and BigQuery (see the BigQuery sketch below this posting). Operationalize and automate data best practices: quality, auditability, timeliness, and completeness. Participate in design reviews to accelerate the business and ensure scalability. Work with Data Engineering and Architecture and Data Platform Engineering to implement strategic solutions. Advise and direct team members and business partners on Ford standards and processes.
Qualifications
Preferred Qualifications:
Excellent communication, collaboration, and influence skills; ability to energize a team. Knowledge of data, software and architecture operations, data engineering, and data management standards, governance, and quality. Hands-on experience in Python using libraries like NumPy, Pandas, etc. Extensive knowledge and understanding of GCP offerings and bundled services, especially those associated with data operations: Cloud Console, BigQuery, Dataflow, Data Fusion, Pub/Sub / Kafka, Looker Studio, Vertex AI. Experience with Teradata, Hadoop, Hive, Spark, and other parts of a legacy data platform. Experience with recoding, re-developing, and optimizing data operations, data science, and analytical workflows and products. Data governance concepts including GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act), PoLP, and how these can impact technical architecture.
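For a concrete sense of the BigQuery work mentioned above, a minimal sketch using the google-cloud-bigquery client to materialize an aggregate into a destination table. The project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical source and destination tables.
sql = """
    SELECT vin, DATE(event_ts) AS event_date, COUNT(*) AS events
    FROM `example_project.telemetry.raw_events`
    GROUP BY vin, event_date
"""

job_config = bigquery.QueryJobConfig(
    destination="example_project.curated.daily_vehicle_events",
    write_disposition="WRITE_TRUNCATE",  # rebuild the table on each run
)
client.query(sql, job_config=job_config).result()  # blocks until the job finishes
```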
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Grade Level (for internal use): 10
Market Intelligence
The Role: Senior Full Stack Developer
Grade level: 10
The Team: You will work with a team of intelligent, ambitious, and hard-working software professionals. The team is responsible for the architecture, design, development, quality, and maintenance of the next-generation financial data web platform. Other responsibilities include transforming product requirements into technical design and implementation. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.
The Impact: Market Intelligence is seeking a Software Developer to create software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal & external applications supporting the business portfolio. This role expects a candidate to handle any data processing or big data application development. We have teams made up of people that learn how to work effectively together while working with the larger group of developers on our platform.
What's in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies including AWS Cloud, EMR, and Apache NiFi. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.
Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL). Build data models, achieve performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups.
What We're Looking For:
Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development. Proficiency with object-oriented programming. Advanced SQL programming skills. Preferred experience or familiarity with tools and technologies such as OData, Grafana, Kibana, big data platforms, Apache Kafka, GitHub, AWS EMR, Terraform, and emerging areas like AI/ML and GitHub Copilot. Highly recommended skillset in Databricks, Spark, and Scala technologies. Understanding of database performance tuning in large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies. Knowledge of fundamentals or the financial industry highly preferred. Experience in conducting application design and code reviews.
Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).
Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.
Benefits:
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: AWS Data Engineer
Job Summary
We are looking for a skilled Data Engineer to join our team and help build, deploy, and maintain data pipelines and data systems using AWS technologies. In this role, you will be responsible for designing and implementing data integration processes, ensuring data quality, and providing technical expertise in data modeling, ETL, and business intelligence. You will work collaboratively with cross-functional teams to support a range of data-driven projects and initiatives.
Responsibilities
Data Pipeline Development: Design, develop, test, and deploy data integration processes using AWS services (e.g., Redshift, RDS, Glue, S3, Lambda, Kinesis, Step Functions) and other tools (see the Glue sketch below this posting).
Data Integration: Build and manage data pipelines for batch and real-time data processing. Develop and maintain data models to support business needs and ensure data accuracy.
Documentation: Create and maintain technical documentation, including ETL architecture, data integration specifications, and data testing plans.
Collaboration: Work with business users and stakeholders to understand data requirements, create data flows, and develop conceptual, logical, and physical data models.
Technology Utilization: Leverage AWS technologies and best practices to optimize data processing and integration. Stay current with emerging technologies and recommend innovative solutions.
Data Quality: Ensure the accuracy, reliability, and performance of data systems. Implement testing strategies and automation to maintain data integrity.
Support & Maintenance: Provide support for data systems and pipelines, troubleshooting and resolving issues as they arise. Perform regular maintenance and updates to ensure optimal performance.
Reporting & Analytics: Collaborate with the reporting team to design and implement data solutions that support business intelligence and analytics.
Required Qualifications
Experience: 4+ years of hands-on experience as a Data Engineer with a strong focus on AWS technologies (e.g., EMR, Redshift, RDS, Glue, S3, Lambda, Athena, Kinesis & CloudWatch).
Technical Skills: Proficient in Python and SQL script creation, and data integration tools. Experience with data modeling and ETL processes.
Programming Skills: Experience in programming languages such as Python, PySpark, or Scala.
Database Platforms: Experience with major database platforms (e.g., SQL Server, Oracle, Snowflake, Redshift).
Orchestration & Automation: Familiarity with orchestration tools (e.g., AWS Data Pipeline, Step Functions), infrastructure automation (e.g., Terraform, CloudFormation), and CI/CD pipelines (e.g., Jenkins, GitLab CI/CD).
Build & Test Tools: Working knowledge of build tools (e.g., Maven, Gradle) and testing frameworks (e.g., JUnit, pytest).
Documentation: Ability to create comprehensive technical documentation and maintain clear records of data processes and systems.
Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
Preferred Skills & Experience
Big Data Frameworks: Experience with big data frameworks (e.g., Spark, Hadoop) and related technologies (e.g., PySpark, Spark SQL).
Data Integration Processes: Knowledge of data warehousing, data integration, and ETL processes.
Communication: Strong communication skills, with the ability to effectively collaborate with team members and stakeholders.
Problem-Solving: Demonstrated ability to troubleshoot complex data issues and implement effective solutions.
Agile Environment: Experience working in an agile development environment with tools like Azure DevOps or JIRA.
If you are a motivated data professional with experience in AWS and a passion for solving complex data challenges, we encourage you to apply for this exciting opportunity.
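As an illustration of the Glue-based pipeline development listed above, a minimal Glue job script sketch: reading a catalog table into a DynamicFrame, deduplicating, and writing Parquet to S3. The database, table, column, and bucket names are placeholders.

```python
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrapping.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

# Hypothetical catalog database/table, e.g. one registered by a Glue crawler.
orders = glue.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
).toDF()

# Basic cleanup: drop duplicate orders and obviously bad rows.
clean = orders.dropDuplicates(["order_id"]).where("order_total >= 0")

clean.write.mode("overwrite").parquet("s3://example-curated/orders/")
```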
Posted 1 week ago
10.0 - 15.0 years
30 - 45 Lacs
Mumbai, Navi Mumbai, Gurugram
Work from Office
Role Description
We are looking for a suitable candidate for the opening of a Data/Technical Architect role for Data Management, preferably one who has worked in the Insurance or Banking and Financial Services domain and holds 10+ years of relevant experience. The candidate should be willing to take up the role of Senior Manager/Associate Director in an organization based on overall experience.
Location: Mumbai and Gurugram
Relevant experience: 10+ years
Key Responsibilities:
Provide technical leadership regarding data strategy and roadmap exercises, data architecture definition, business intelligence/data warehouse product selection, design, and implementation for the enterprise.
Proven track record of success in implementations for Data Lake, Data Warehouse/Data Marts, and Data Lakehouse on cloud data platforms.
Hands-on experience in leading large-scale global data warehousing and analytics projects.
Demonstrated industry leadership in the fields of database, data warehousing, or data sciences.
Be accountable for creating end-to-end solution design and development approach on a cloud platform, including sizing and TCO.
Should have deep technical expertise on cloud data components, including but not limited to Cloud Storage (S3/ADLS/GCS), EMR/Databricks, Redshift/Synapse/BigQuery, Glue/Azure Data Factory/Data Fusion/Dataflow, Cloud Functions, EventBridge, etc.
NoSQL understanding and use-case application: DynamoDB, Cosmos DB, or any other technology.
Should have worked extensively on creating reusable assets for data integration, transformation, auditing, and validation frameworks.
Knowledge of scripting/programming skills: Python, Java, Scala, Go.
Implementation and tuning experience of data warehousing platforms, including knowledge of data warehouse schema design, query tuning and optimization, and data migration and integration.
Experience with requirements for the analytics presentation layer, including dashboards, reporting, and OLAP.
Extensive experience in designing data architecture, data modeling, design, development, data migration, and data integration aspects of the SDLC.
Participate in and/or lead design sessions, demos and prototype sessions, testing, and training workshops with business users and other IT associates.
Should have experience designing new or enhancing existing architecture frameworks and implementing them in a cooperative and collaborative setting.
Troubleshooting skills, ability to determine impacts, ability to resolve complex issues, and initiative in stressful situations.
Contributed significantly to business development activities.
Strong oral and written communication and interpersonal skills.
Working experience with Agile & Scrum methods.
Develop documentation and maintain as needed.
Support projects by providing SME knowledge to project teams in the areas of Enterprise Data Management.
Interested candidates, please share your CVs at mudesh.kumar.tpr@pwc.com
Posted 1 week ago
8.0 years
0 Lacs
Greater Hyderabad Area
On-site
Job Title: Data Engineering Lead
Job Type: Full-time
Location: Hyderabad
Expected Joining Time: Immediate to 30 days
Job Description
We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced consulting environment.
Key Responsibilities
Client Collaboration: Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables. Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle. Present technical concepts and designs to both technical and non-technical audiences. Set realistic expectations with clients and proactively address concerns or risks.
Data Solution Design and Development: Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets. Ensure optimal system architecture for performance, scalability, and reliability. Evaluate and integrate new technologies to enhance existing solutions. Implement best practices in ETL/ELT processes, data integration, and data warehousing (see the Snowflake sketch below this posting).
Project Leadership and Delivery: Lead technical project execution, ensuring timelines and deliverables are met with high quality. Collaborate with cross-functional teams to align business goals with technical solutions. Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.
Team Leadership and Development: Manage, mentor, and grow a team of 5 to 7 data engineers. Ensure timely follow-ups on action items and maintain seamless communication across time zones. Conduct code reviews and validations, and provide feedback to ensure adherence to technical standards. Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration. Support collaboration and alignment between the client and delivery teams.
Optimization and Performance Tuning: Be hands-on in developing, testing, and documenting data pipelines and solutions as needed. Analyze and optimize existing data workflows for performance and cost-efficiency. Troubleshoot and resolve complex technical issues within data systems.
Adaptability and Innovation: Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks. Identify opportunities for innovation and implement cutting-edge technologies in data engineering. Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.
Learning and Adaptability: Stay updated with emerging data technologies, frameworks, and tools. Actively explore and integrate new technologies to improve existing workflows and solutions.
Internal Initiatives and Eminence Building: Drive internal initiatives to improve processes, frameworks, and methodologies. Contribute to the organization’s eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.
Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Certifications in cloud platforms such as Snowflake SnowPro Data Engineer are a plus.
Experience: 8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization. Demonstrated success in managing global teams, especially across US and India time zones. Proven track record in leading data engineering teams and managing end-to-end project delivery. Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc.
Technical Skills: Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs. Expertise in programming languages such as Python, Scala, or Java. Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational and NoSQL) and data modeling techniques. Hands-on experience of 2+ years in designing and developing data integration solutions using Matillion and/or dbt. Strong knowledge of data engineering and integration frameworks. Expertise in architecting data solutions. Successfully implemented at least two end-to-end projects with multiple transformation layers. Good grasp of coding standards, with the ability to define standards and testing strategies for projects. Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services. Enthusiastic about working in Agile methodology. Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.
Soft Skills: Exceptional problem-solving and analytical skills. Strong communication and interpersonal skills to manage client relationships and team dynamics. Ability to thrive in a consulting environment, quickly adapting to new challenges and domains. Ability to handle ambiguity and proactively take ownership of challenges. Demonstrated accountability, ownership, and a proactive approach to solving problems.
Why Join Us? Be at the forefront of data innovation and lead impactful projects. Work with a collaborative and forward-thinking team. Opportunity to mentor and develop talent in the data engineering space. Competitive compensation and benefits package. A dynamic environment where your contributions directly shape the future of data-driven decision-making.
About Us
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, which leads to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. With deep, big four consulting experience in business transformation and efficient processes, Logic Pursuits is a game-changer in any operations strategy.
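To make the ETL/ELT-into-Snowflake point above concrete, a minimal sketch using the Snowflake Python connector to run an incremental MERGE. All connection values and table names are placeholders; real pipelines here would more likely run through Matillion or dbt.

```python
import snowflake.connector

# All connection values are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

with conn.cursor() as cur:
    # Incremental upsert from a hypothetical raw stream table.
    cur.execute("""
        MERGE INTO staging.customers t
        USING raw.customers_stream s ON t.customer_id = s.customer_id
        WHEN MATCHED THEN UPDATE SET t.email = s.email
        WHEN NOT MATCHED THEN INSERT (customer_id, email)
             VALUES (s.customer_id, s.email)
    """)
conn.close()
```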
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description
UST is looking for a talented GCP Data Engineer with 5 to 10 years of experience to join our team and play a crucial role in designing and implementing efficient data solutions on the Google Cloud Platform (GCP). The ideal candidate should possess strong data engineering skills, expertise in GCP services, and proficiency in data processing technologies, particularly PySpark.
Responsibilities
Data Pipeline Development: Design, implement, and optimize end-to-end data pipelines on GCP, focusing on scalability and performance. Develop and maintain ETL workflows for seamless data processing.
GCP Cloud Expertise: Utilize GCP services such as BigQuery, Cloud Storage, and Dataflow for effective data engineering. Implement and manage data storage solutions on GCP.
Data Transformation with PySpark: Leverage PySpark for advanced data transformations, ensuring high-quality and well-structured output. Implement data cleansing, enrichment, and validation processes using PySpark (see the sketch below this posting).
Requirements
Proven experience as a Data Engineer, with a strong emphasis on GCP. Proficiency in GCP services such as BigQuery, Cloud Storage, and Dataflow. Expertise in PySpark for data processing and analytics is a must. Experience with data modeling, ETL processes, and data warehousing. Proficiency in programming languages such as Python, SQL, or Scala for data processing. Relevant certifications in GCP or data engineering are a plus.
Skills: GCP, PySpark
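A minimal sketch of the PySpark cleansing-and-validation work this role centers on: normalizing a field, flagging invalid rows, and routing them to a quarantine path. The GCS paths, columns, and validation rule are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("validate").getOrCreate()

# Hypothetical GCS source path.
df = spark.read.parquet("gs://example-raw/customers/")

# Cleanse and validate: normalize email, then flag rows failing basic checks.
validated = (
    df.withColumn("email", F.lower(F.trim("email")))
      .withColumn(
          "is_valid",
          F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
          & F.col("customer_id").isNotNull(),
      )
)

# Valid rows go to the curated zone; the rest are quarantined for review.
validated.filter("is_valid").drop("is_valid") \
    .write.mode("overwrite").parquet("gs://example-curated/customers/")
validated.filter(~F.col("is_valid")) \
    .write.mode("append").parquet("gs://example-quarantine/customers/")
```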
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Nagercoil
Work from Office
Managing sales of Loan Against Property & Business Loans for the Ameerpet region.
Lead a team of Relationship Managers to generate business through direct sourcing.
Building the sales and distribution network in the assigned territory.
Recruit, train, and monitor team members, ensuring quality service delivery.
Managing the loan process from lead generation till disbursement of the loan.
Ensure synergy between sales, credit, and operations to ensure the efficiency of business processes.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At Opkey, we are disrupting the space of ERP transformation testing by building an AI-powered no-code testing platform for enterprise business applications (like Oracle Fusion Cloud, SAP S4Hana, SAP, Workday, Salesforce, and the like). Opkey is a fast-growing VC-backed continuous end-to-end test automation software company headquartered in Dublin, California, with additional offices in Pittsburgh (opened in 2022), NYC (opened in 2022), and India (Noida & Bangalore). With the test automation market growing 20% annually, it's estimated to reach $50 billion by 2026. Trusted by 250+ enterprise customers, including GAP, Pfizer, and KPMG.
We are seeking a highly skilled Lead AI Engineer with 8+ years of experience, preferably from SaaS and product-based companies, to drive our AI initiatives from ideation to deployment. You will work closely with cross-functional teams to design, develop, and scale innovative AI solutions that power our next-generation platforms.
Key Responsibilities
Architect, build, and deploy AI/ML models for SaaS products at scale. Lead the end-to-end lifecycle of AI projects, from data exploration, model development, validation, and deployment to monitoring and maintenance. Collaborate with Product Management, Engineering, and Design teams to integrate AI capabilities into product offerings. Implement and optimize Retrieval-Augmented Generation (RAG) systems, Large Language Models (LLMs), and other emerging AI/ML techniques (see the retrieval sketch below this posting). Define and uphold best practices in AI model development, MLOps, and scalable deployment. Mentor and guide a team of AI/ML engineers, setting technical direction and fostering a culture of innovation and excellence. Partner with stakeholders to define AI strategies aligned with overall technology roadmaps and business objectives. Stay abreast of advancements in AI and contribute thought leadership internally and externally.
Required Skills & Experience
10-12 years of experience in AI/ML engineering, with a strong record of working in SaaS and product-based environments. Expertise in Machine Learning, Deep Learning, Natural Language Processing (NLP), Computer Vision, and/or Generative AI. Hands-on experience with frameworks like TensorFlow, PyTorch, Hugging Face Transformers, etc. Solid experience in designing scalable AI architectures and deploying models in production environments (AWS, GCP, Azure, etc.). Strong programming skills in Python; familiarity with other languages like Java, Go, or Scala is a plus. Deep understanding of MLOps, CI/CD for machine learning pipelines, and containerization (Docker, Kubernetes). Experience with LLM fine-tuning, prompt engineering, and vector databases (e.g., Pinecone, FAISS) is highly desirable. [Non-negotiable requisite] Experience training an SLM (or medium language model) for a particular vertical, including pre-training and fine-tuning. Strong problem-solving skills and ability to navigate ambiguous technical challenges. Excellent communication, leadership, and stakeholder management skills.
Preferred Qualifications
Master’s or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field. Experience in go-to-market strategy for AI-powered products. Experience integrating AI into customer-facing SaaS products with measurable outcomes. Contributions to open-source AI projects or published research papers.
Skills: TensorFlow, Hugging Face Transformers, CI/CD, FAISS, Pinecone, Natural Language Processing (NLP), Retrieval-Augmented Generation (RAG), machine learning, Scala, Java, training, Python, SLM, Go, computer vision, PyTorch, generative AI, vector databases, Kubernetes, reinforcement learning, deep learning, MLOps, Docker, language models, Large Language Models (LLMs)
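As a rough sketch of the RAG retrieval step referenced above: building a FAISS index over document embeddings and fetching the top passages for a query. The dimension, the random stand-in vectors, and k are placeholders; a real system would use an actual sentence encoder.

```python
import numpy as np
import faiss

dim = 384  # embedding size; depends on the chosen encoder
index = faiss.IndexFlatIP(dim)  # inner-product similarity on normalized vectors

# Stand-in embeddings; a real pipeline would encode document passages here.
doc_vectors = np.random.rand(1000, dim).astype("float32")
faiss.normalize_L2(doc_vectors)  # normalize so inner product = cosine similarity
index.add(doc_vectors)

# Encode the user query the same way, then retrieve nearest passages.
query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, k=5)  # top-5 passages to feed the LLM prompt
```

The retrieved passages would then be stitched into the prompt sent to the generation model.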
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.
Machine Learning Engineer (T24), Product Knowledge
Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!
Who Are We?
The Product Knowledge team is at the epicenter of eBay’s tech-driven, customer-centric overhaul. Our team is entrusted with creating and using eBay’s Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!
What Will You Do
We are looking for exceptional engineers, who take pride in creating simple solutions to apparently-complex problems. Our engineering tasks typically involve at least one of the following:
Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets
Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items
Crafting sound API design and driving integration between our Data layers and Customer-facing applications and components
Designing and running A/B tests in Production experiences in order to vet and measure the impact of any new or improved functionality
If you love a good challenge, and are good at handling complexity - we’d love to hear from you!
eBay is an amazing company to work for. Being on the team, you can expect to benefit from:
A competitive salary - including stock grants and a yearly bonus
A healthy work culture that promotes business impact and at the same time highly values your personal well-being
Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent
Job Responsibilities
Design, deliver, and maintain significant features in data pipelines, ML processing, and/or service infrastructure
Optimize software performance to achieve the required throughput and/or latency
Work with your manager, peers, and Product Managers to scope projects and features
Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution
Take an active part in knowledge sharing across the organization - both teaching and learning from others
Minimum Qualifications
Passion and commitment for technical excellence
B.Sc. or M.Sc. in Computer Science or equivalent professional experience
2+ years of software design and development experience, tackling non-trivial problems in backend services and/or data pipelines
A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design, and core Statistics knowledge
Experience in production-grade coding in Java and Python/Scala
Experience in the close examination of data and computation of statistics
Experience in using and operating Big Data processing pipelines, such as Hadoop and Spark
Good verbal and written communication and collaboration skills
Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position: Principal Data Engineer
Experience: Must have 8+ years of experience
About Role: We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine-learning-powered data products designed to enhance customer experiences.
About us: Nurtured from the seed of a single great idea - to empower the traveler - MakeMyTrip went on to pioneer India’s online travel industry. Founded in the year 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments has been to ring the bell at NASDAQ in 2010. Post-merger with the Ibibo group in 2017, we created a stronger identity and traction for our portfolio of brands, increasing the pace of product and technology innovations. Ranked amongst the LinkedIn Top 25 companies 2018. GO-MMT is the corporate entity of three giants in the online travel industry: Goibibo, MakeMyTrip, and RedBus. The GO-MMT family celebrates the compounded strengths of their brands. The group company is easily the most sought-after corporate in the online travel industry.
About the team: MakeMyTrip is India’s leading online travel company and provides petabytes of raw data which are helpful for business growth, analytical, and machine learning needs. The Data Platform Team is a horizontal function at MakeMyTrip to support various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets, which power personalized experiences for every customer, from recommendations to in-location engagement.
There are two key responsibilities of the Data Engineering team. One is to develop the platform for data capture, storage, processing, serving, and querying. The second is to develop data products, starting from:
o personalization & recommendation platform
o customer segmentation & intelligence
o data insights engine for persuasions and
o the customer engagement platform to help marketers craft contextual and personalized campaigns over multi-channel communications to users
We developed Feature Store, an internal unified data analytics platform that helps us to build reliable data pipelines, simplify featurization, and accelerate model training. This enabled us to enjoy actionable insights into what customers want, at scale, and to drive richer, personalized online experiences.
Technology experience:
Extensive experience working with large data sets with hands-on technology skills to design and build robust data architecture
Extensive experience in data modeling and database design
At least 6+ years of hands-on experience in the Spark/Big Data tech stack
Stream processing engines - Spark Structured Streaming/Flink
Analytical processing on Big Data using Spark
At least 6+ years of experience in Scala
Hands-on administration, configuration management, monitoring, and performance tuning of Spark workloads, distributed platforms, and JVM-based systems
At least 2+ years of cloud deployment experience - AWS | Azure | Google Cloud Platform
At least 2+ product deployments of big data technologies - Business Data Lake, NoSQL databases, etc.
Awareness and decision-making ability to choose among various big data, NoSQL, and analytics tools and technologies
Should have experience in architecting and implementing domain-centric big data solutions
Ability to frame architectural decisions and provide technology leadership & direction
Excellent problem solving, hands-on engineering, and communication skills
Posted 1 week ago
10.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Position: Senior Principal Data Engineer
Experience: Must have 10+ years of experience
About Role: We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine-learning-powered data products designed to enhance customer experiences.
About us: Nurtured from the seed of a single great idea - to empower the traveler - MakeMyTrip went on to pioneer India’s online travel industry. Founded in the year 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments has been to ring the bell at NASDAQ in 2010. Post-merger with the Ibibo group in 2017, we created a stronger identity and traction for our portfolio of brands, increasing the pace of product and technology innovations. Ranked amongst the LinkedIn Top 25 companies 2018. GO-MMT is the corporate entity of three giants in the online travel industry: Goibibo, MakeMyTrip, and RedBus. The GO-MMT family celebrates the compounded strengths of their brands. The group company is easily the most sought-after corporate in the online travel industry.
About the team: MakeMyTrip is India’s leading online travel company and provides petabytes of raw data which are helpful for business growth, analytical, and machine learning needs. The Data Platform Team is a horizontal function at MakeMyTrip to support various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets, which power personalized experiences for every customer, from recommendations to in-location engagement.
There are two key responsibilities of the Data Engineering team. One is to develop the platform for data capture, storage, processing, serving, and querying. The second is to develop data products, starting from:
o personalization & recommendation platform
o customer segmentation & intelligence
o data insights engine for persuasions and
o the customer engagement platform to help marketers craft contextual and personalized campaigns over multi-channel communications to users
We developed Feature Store, an internal unified data analytics platform that helps us to build reliable data pipelines, simplify featurization, and accelerate model training. This enabled us to enjoy actionable insights into what customers want, at scale, and to drive richer, personalized online experiences.
Technology experience:
Extensive experience working with large data sets with hands-on technology skills to design and build robust data architecture
Extensive experience in data modeling and database design
At least 6+ years of hands-on experience in the Spark/Big Data tech stack
Stream processing engines - Spark Structured Streaming/Flink
Analytical processing on Big Data using Spark
At least 6+ years of experience in Scala
Hands-on administration, configuration management, monitoring, and performance tuning of Spark workloads, distributed platforms, and JVM-based systems
At least 2+ years of cloud deployment experience - AWS | Azure | Google Cloud Platform
At least 2+ product deployments of big data technologies - Business Data Lake, NoSQL databases, etc.
Awareness and decision-making ability to choose among various big data, NoSQL, and analytics tools and technologies
Should have experience in architecting and implementing domain-centric big data solutions
Ability to frame architectural decisions and provide technology leadership & direction
Excellent problem solving, hands-on engineering, and communication skills
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position: Senior Data Engineer II
Experience: Must have 4+ years of experience
About Role: We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine-learning-powered data products designed to enhance customer experiences.
About us: Nurtured from the seed of a single great idea - to empower the traveler - MakeMyTrip went on to pioneer India's online travel industry. Founded in 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments was ringing the bell at NASDAQ in 2010. Post-merger with the Ibibo group in 2017, we created a stronger identity and greater traction for our portfolio of brands, increasing the pace of product and technology innovation. We were ranked amongst the LinkedIn Top 25 Companies in 2018. GO-MMT is the corporate entity of three giants in the online travel industry: Goibibo, MakeMyTrip and RedBus. The GO-MMT family celebrates the compounded strengths of its brands, and the group company is easily the most sought-after corporate in the online travel industry.
About the team: MakeMyTrip, as India's leading online travel company, generates petabytes of raw data that fuel business growth, analytics, and machine-learning needs. The Data Platform Team is a horizontal function at MakeMyTrip that supports the various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets, which power personalized experiences for every customer, from recommendations to in-location engagement.
The Data Engineering team has two key responsibilities: first, to develop the platform for data capture, storage, processing, serving and querying; second, to develop data products, including:
o a personalization & recommendation platform
o customer segmentation & intelligence
o a data insights engine for persuasions
o a customer engagement platform to help marketers craft contextual, personalized campaigns over multi-channel communications to users
We developed Feature Store, an internal unified data analytics platform that helps us build reliable data pipelines, simplify featurization and accelerate model training. This gives us actionable insights into what customers want, at scale, and drives richer, personalized online experiences.
Technology experience:
- Extensive experience working with large data sets, with the hands-on technology skills to design and build robust data architectures
- Extensive experience in data modeling and database design
- At least 4 years of hands-on experience with the PySpark/Big Data tech stack
- Stream processing engines: Spark Structured Streaming
- Analytical processing on Big Data using Spark
- At least 4 years of experience in Python/Scala
- Hands-on administration, configuration management, monitoring and performance tuning of Spark workloads, distributed platforms, and JVM-based systems
- At least 4 years of cloud deployment experience: AWS | Azure | Google Cloud Platform
- At least 4 product deployments of big data technologies (business data lake, NoSQL databases, etc.)
- Awareness and the decision-making ability to choose among the various big data, NoSQL, and analytics tools and technologies
- Experience in architecting and implementing domain-centric big data solutions
- Ability to frame architectural decisions and provide technology leadership and direction
- Excellent problem-solving, hands-on engineering, and communication skills
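As a rough illustration of the featurization work this listing mentions, the sketch below computes per-user aggregates in a Spark batch job (shown in Scala, though the role also lists PySpark). The input path, table layout, and column names are invented for the example.

```scala
// Hedged sketch: batch feature computation feeding a feature store.
// Paths and column names below are assumptions for illustration only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BookingFeatures {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("booking-features").getOrCreate()

    // Hypothetical bookings table in a data lake.
    val bookings = spark.read.parquet("s3://data-lake/bookings/")

    // Per-user aggregates of the kind commonly served as model features.
    val features = bookings
      .groupBy("userId")
      .agg(
        count("*").as("total_bookings"),
        avg("amount").as("avg_booking_value"),
        max("bookedAt").as("last_booking_at"),
        countDistinct("lob").as("distinct_lobs")
      )

    // A production job would publish to the feature store; parquet stands in here.
    features.write.mode("overwrite").parquet("s3://feature-store/user_booking_features/")
    spark.stop()
  }
}
```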
Posted 1 week ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
We at MakeMyTrip understand that every traveller is unique, and as the leading OTA in India we have the leverage to redefine the travel booking experience to meet their needs. If you love to travel and want to be part of a dynamic team that works on personalizing every user's journey, then look no further. We are looking for a brilliant mind like yours to join our Data Platform team to build exciting data products at scale, where we solve for industry-best, fault-tolerant feature stores, real-time data pipelines, catalogs, and much more.
Hands-on: Spark, Scala
Technologies: Spark, Aerospike, Databricks, Kafka, Debezium, EMR, Athena, Glue, RocksDB, Redis, Airflow, MySQL, and any other data sources (e.g. Mongo, Neo4j, etc.) used by other teams.
Location: Gurgaon/Bengaluru
Experience: 6+ years
Industry Preference: E-Commerce
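Since this listing names Kafka and Debezium, here is a hedged sketch of consuming a Debezium change-data-capture stream with Spark Structured Streaming in Scala. The topic name and the simplified envelope schema are assumptions; real Debezium messages carry additional fields such as source and ts_ms.

```scala
// Hedged sketch: reading a Debezium CDC topic with Spark Structured Streaming.
// Topic name and the simplified envelope are assumptions for illustration.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object CdcIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cdc-ingest").getOrCreate()
    import spark.implicits._

    // Simplified Debezium envelope: { "payload": { "op": ..., "after": {...} } }.
    val afterSchema = new StructType()
      .add("id", LongType)
      .add("status", StringType)
    val envelope = new StructType()
      .add("payload", new StructType()
        .add("op", StringType)
        .add("after", afterSchema))

    val changes = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka:9092") // placeholder broker
      .option("subscribe", "mysql.bookings.orders")    // placeholder topic
      .load()
      .select(from_json($"value".cast("string"), envelope).as("m"))
      .select($"m.payload.op".as("op"), $"m.payload.after".as("after"))
      .filter($"op".isin("c", "u")) // keep creates and updates
      .select($"op", $"after.*")

    changes.writeStream
      .format("console") // a real job would upsert into a store such as Aerospike
      .option("checkpointLocation", "/tmp/checkpoints/cdc")
      .start()
      .awaitTermination()
  }
}
```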
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Job Summary
The person at this position takes ownership of a module and its associated quality and delivery. They provide instructions, guidance and advice to team members to ensure quality and on-time delivery, and are expected to instruct and review the quality of work done by technical staff. They should be able to identify key issues and challenges on their own, prioritize tasks, and deliver results with minimal direction and supervision. They can investigate the root cause of a problem and come up with alternatives/solutions based on a sound technical foundation gained through in-depth knowledge of technology, standards, tools and processes. They can organize and draw connections among ideas and distinguish between those which are implementable, and demonstrate a degree of flexibility in resolving problems/issues that attests to an in-depth command of all techniques, processes, tools and standards within the relevant field of specialisation.
Roles & Responsibilities
- Responsible for requirement analysis and feasibility studies, including system-level work estimation with risk identification and mitigation.
- Responsible for design, coding, testing, bug fixing, documentation and technical support in the assigned area.
- Responsible for on-time delivery while adhering to quality and productivity goals.
- Responsible for traceability of requirements from design to delivery, code optimization and coverage.
- Responsible for conducting reviews, identifying risks and owning the quality of deliverables.
- Responsible for identifying the training needs of the team.
- Expected to enhance technical capabilities through training, self-study and periodic technical assessments.
- Expected to participate in technical initiatives related to the project and organization, and to deliver training as per plan and quality.
- Expected to be a technical mentor for junior members.
- May be given the additional responsibility of managing people at the discretion of the Project Manager.
Education and Experience Required
Engineering graduate, MCA, etc.
Experience: 5-8 years
Competencies Description
Data Science TCB is applicable to one who:
1) Analyses data to arrive at patterns/insights/models
2) Comes up with models based on the data to provide recommendations, predictive analytics, etc.
3) Provides implementations of the models in R, Matlab, etc.
4) Can understand and apply machine learning/AI techniques
Platforms: Unix
Tools: R, Matlab, Spark Machine Learning, Python-ML, SPSS, SAS
Languages: R, Perl, Python, Scala
Specialization: COGNITIVE ANALYTICS INCLUDING COMPUTER VISION, AI AND ML, STATISTICS
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Job Summary
The person at this position takes ownership of a module and its associated quality and delivery. They provide instructions, guidance and advice to team members to ensure quality and on-time delivery, and are expected to instruct and review the quality of work done by technical staff. They should be able to identify key issues and challenges on their own, prioritize tasks, and deliver results with minimal direction and supervision. They can investigate the root cause of a problem and come up with alternatives/solutions based on a sound technical foundation gained through in-depth knowledge of technology, standards, tools and processes. They can organize and draw connections among ideas and distinguish between those which are implementable, and demonstrate a degree of flexibility in resolving problems/issues that attests to an in-depth command of all techniques, processes, tools and standards within the relevant field of specialisation.
Roles & Responsibilities
- Responsible for requirement analysis and feasibility studies, including system-level work estimation with risk identification and mitigation.
- Responsible for design, coding, testing, bug fixing, documentation and technical support in the assigned area.
- Responsible for on-time delivery while adhering to quality and productivity goals.
- Responsible for traceability of requirements from design to delivery, code optimization and coverage.
- Responsible for conducting reviews, identifying risks and owning the quality of deliverables.
- Responsible for identifying the training needs of the team.
- Expected to enhance technical capabilities through training, self-study and periodic technical assessments.
- Expected to participate in technical initiatives related to the project and organization, and to deliver training as per plan and quality.
- Expected to be a technical mentor for junior members.
- May be given the additional responsibility of managing people at the discretion of the Project Manager.
Education and Experience Required
Engineering graduate, MCA, etc.
Experience: 5-8 years
Competencies Description
Application Protocol & Engines: a Linux engineer is one
- who has done one or more of the following on Embedded Linux: design, development/customization, bug fixing/sustenance
- who has experience in one or more of the following domains: Multimedia, Telephony, Connectivity, Sensor, Security
Platforms (mandatory to have worked on one or more of the following): Embedded Linux
Tools (mandatory to have worked on one or more of the following): gdb/ddd; Linux editors; top; ps; meminfo
Languages (mandatory to have worked on one or more of the following): C; C++
Specialization: MULTIMEDIA, CONNECTIVITY, TELEPHONY, CARRIER GRADE PLATFORM, GENERIC FRAMEWORK
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position: Senior Principal Data Engineer
Experience: Must have 10+ years of experience
About Role: We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine-learning-powered data products designed to enhance customer experiences.
About us: Nurtured from the seed of a single great idea - to empower the traveler - MakeMyTrip went on to pioneer India's online travel industry. Founded in 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments was ringing the bell at NASDAQ in 2010. Post-merger with the Ibibo group in 2017, we created a stronger identity and greater traction for our portfolio of brands, increasing the pace of product and technology innovation. We were ranked amongst the LinkedIn Top 25 Companies in 2018. GO-MMT is the corporate entity of three giants in the online travel industry: Goibibo, MakeMyTrip and RedBus. The GO-MMT family celebrates the compounded strengths of its brands, and the group company is easily the most sought-after corporate in the online travel industry.
About the team: MakeMyTrip, as India's leading online travel company, generates petabytes of raw data that fuel business growth, analytics, and machine-learning needs. The Data Platform Team is a horizontal function at MakeMyTrip that supports the various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets, which power personalized experiences for every customer, from recommendations to in-location engagement.
The Data Engineering team has two key responsibilities: first, to develop the platform for data capture, storage, processing, serving and querying; second, to develop data products, including:
o a personalization & recommendation platform
o customer segmentation & intelligence
o a data insights engine for persuasions
o a customer engagement platform to help marketers craft contextual, personalized campaigns over multi-channel communications to users
We developed Feature Store, an internal unified data analytics platform that helps us build reliable data pipelines, simplify featurization and accelerate model training. This gives us actionable insights into what customers want, at scale, and drives richer, personalized online experiences.
Technology experience:
- Extensive experience working with large data sets, with the hands-on technology skills to design and build robust data architectures
- Extensive experience in data modeling and database design
- At least 6 years of hands-on experience with the Spark/Big Data tech stack
- Stream processing engines: Spark Structured Streaming/Flink
- Analytical processing on Big Data using Spark
- At least 6 years of experience in Scala
- Hands-on administration, configuration management, monitoring and performance tuning of Spark workloads, distributed platforms, and JVM-based systems
- At least 2 years of cloud deployment experience: AWS | Azure | Google Cloud Platform
- At least 2 product deployments of big data technologies (business data lake, NoSQL databases, etc.)
- Awareness and the decision-making ability to choose among the various big data, NoSQL, and analytics tools and technologies
- Experience in architecting and implementing domain-centric big data solutions
- Ability to frame architectural decisions and provide technology leadership and direction
- Excellent problem-solving, hands-on engineering, and communication skills
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Work from Office
• Proven success as a senior engineering leader, with 10+ years of progressive experience, including 8 years or more leading engineering teams at startups
• Deep expertise in modern engineering practices, including cloud infrastructure, ...
Required Candidate profile
microservices, and scalable system design
• Strong technical foundation, with deep knowledge of modern full-stack development practices, architectures, and tools, including JavaScript frameworks (e...
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
Cities such as Bengaluru, Hyderabad, and Gurugram are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job; the short sketch below illustrates the functional-programming style interviewers often look for.
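As a quick illustration of the last item in the list above, this snippet shows the functional-programming idioms Scala interviewers commonly probe: immutable data, higher-order functions, pattern matching, and Option. The Booking case class and its values are made up for the example.

```scala
// Illustrative only: functional-programming idioms often tested in Scala interviews.
object FpBasics extends App {
  case class Booking(userId: String, amount: Double, refunded: Boolean)

  // Immutable data: the list and its elements are never mutated.
  val bookings = List(
    Booking("u1", 4500.0, refunded = false),
    Booking("u2", 1200.0, refunded = true),
    Booking("u1", 800.0,  refunded = false)
  )

  // Higher-order functions: filter, map, and fold instead of mutable loops.
  val netRevenue = bookings
    .filter(!_.refunded)
    .map(_.amount)
    .foldLeft(0.0)(_ + _)

  // Pattern matching plus Option: total spend for a user, if any.
  def spendFor(userId: String): Option[Double] =
    bookings.filter(b => b.userId == userId && !b.refunded) match {
      case Nil => None
      case bs  => Some(bs.map(_.amount).sum)
    }

  println(s"Net revenue: $netRevenue")     // Net revenue: 5300.0
  println(s"u1 spend: ${spendFor("u1")}")  // u1 spend: Some(5300.0)
  println(s"u3 spend: ${spendFor("u3")}")  // u3 spend: None
}
```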
When applying for Scala roles, expect interview questions covering core language features, the collections library, concurrency, and functional programming.
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!