2896 Scala Jobs - Page 30

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurgaon, Haryana, India; Pune, Maharashtra, India.

Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience. 4 years of experience in developing and troubleshooting data processing algorithms. Experience coding in one or more programming languages (e.g., Java, Python) and with Big Data technologies such as Scala, Spark, and Hadoop frameworks. Experience with one public cloud provider, such as GCP.

Preferred qualifications: Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments. Experience in Big Data, information retrieval, data mining, or machine learning. Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform, Load (ETL)/Extract, Load, Transform (ELT), and reporting/analytic tools, environments, and data structures. Experience building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools such as Terraform, Ansible, and Jenkins. Understanding of one database type, with the ability to write complex SQL queries.

About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions, and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees, and partners. As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migration and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities: Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernisation to Google Cloud Platform (GCP). Design, migrate/build, and operationalise data storage and processing infrastructure using cloud-native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Organize project requirements into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurgaon, Haryana, India; Pune, Maharashtra, India.

Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience. 4 years of experience in developing and troubleshooting data processing algorithms. Experience coding in one or more programming languages (e.g., Java, Python) and with Big Data technologies such as Scala, Spark, and Hadoop frameworks. Experience with one public cloud provider, such as GCP.

Preferred qualifications: Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments. Experience in Big Data, information retrieval, data mining, or machine learning. Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform, Load (ETL)/Extract, Load, Transform (ELT), and reporting/analytic tools, environments, and data structures. Experience building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools such as Terraform, Ansible, and Jenkins. Understanding of one database type, with the ability to write complex SQL queries.

About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions, and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees, and partners. As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migration and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities: Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernisation to Google Cloud Platform (GCP). Design, migrate/build, and operationalise data storage and processing infrastructure using cloud-native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Organize project requirements into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Uttar Pradesh, India

On-site

Source: LinkedIn

Job Description: Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems. We are currently seeking an Azure Data Architect to join our Digi team based in Noida.

About us: Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last.

About the opportunity we offer: Develop RESTful APIs using Azure API Management (APIM). Develop integration workflows using Logic Apps, Synapse, and Service Bus. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation using Azure Data Factory and Synapse pipelines. Collaborate closely with Product Owners to understand data pipeline requirements and design effective data workflows. Translate business requirements into technical specifications for data pipelines. Create and maintain data storage solutions using Azure Cosmos DB and Azure Data Lake Storage. Design and implement data models to optimize data storage and retrieval. Ensure data security and compliance with data governance policies. Analyze data pipeline performance metrics to identify bottlenecks and areas for improvement. Monitor data pipelines to ensure data consistency, availability, and adherence to service-level agreements. Integrate data pipelines with Azure DevOps to automate deployment and testing. Leverage Azure DevOps tools for continuous integration and continuous delivery (CI/CD) of data pipelines. Work effectively in an Agile development environment and collaborate with cross-functional teams to deliver value.

About you: 5 years' work experience, with a minimum of 3 years' experience in Microsoft Azure (Azure administration, Data Platform, Data Lake, Synapse pipelines, Synapse Analytics, API Management, and other cloud data architecture). Development environment: Git, Azure DevOps, ARM templates. Languages: C#, .NET, Python. Strong analytical problem solver with an organized approach. Fluent English. Excellent methodology (communication, documentation, collaborative approach). Acts independently and as a top-level contributor in resolving project strategy, scope, and direction. Excellent organizational skills and a proven ability to get results. Data mindset.

Nice to have: Microsoft Azure certifications. Scala, Java. Data-related projects: 5 years minimum.

Your career with us: Working at Technip Energies is an inspiring journey, filled with groundbreaking projects and dynamic collaborations. Surrounded by diverse and talented individuals, you will feel welcomed, respected, and engaged. Enjoy a safe, caring environment where you can spark new ideas, reimagine the future, and lead change. As your career grows, you will benefit from learning opportunities at T.EN University, such as The Future Ready Program, and from the support of your manager through check-in moments like the Mid-Year Development Review, fostering continuous growth and development.

What's next? Once we receive your application, our Talent Acquisition professionals will screen and match your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application progress periodically via the candidate profile created during your application. We invite you to get to know more about our company by visiting our website and following us on LinkedIn, Instagram, Facebook, X, and YouTube for company updates.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Teamwork makes the stream work. Roku is changing how the world watches TV.

Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team: The primary responsibility of the Content Management team is to develop and manage the Content Management System (CMS). This system processes all content showcased on the Roku Channel, including creating ingestion pipelines, collaborating with partners for content acquisition, processing metadata, and managing content selection. The team also ensures that all Roku personnel can seamlessly update metadata. The Content Management team collaborates closely with the Recommendation team to enhance content curation and personalized recommendations. The system is designed to be highly scalable, leveraging distributed architectures and machine learning algorithms. The team aims to build a next-generation platform by revamping, redesigning, and expanding existing systems. This initiative addresses scalability and latency constraints, and accommodates a growing number of content providers and partners.

About the role: Roku pioneered TV streaming and continues to innovate and lead the industry. The Roku Channel has us well-positioned to help shape the future of streaming. Continued success relies on investing in the Roku Cloud TV Platform, so we deliver a high-quality streaming TV experience at global scale. You will be part of the Roku Content Management System and Tools Engineering team, playing a key role in developing the next-generation content management systems that drive content ingestion, selection, management, and curation workflows. These systems are vital for empowering critical functions like Search and Recommendation on the Roku Platform. Your projects will have a direct impact on millions of Roku users globally. Throughout, you'll collaborate with key stakeholders across various Roku engineering teams and take the lead in designing our content management system. The ideal candidate will have endless curiosity and can pair a global mindset with locally relevant execution. You should be a gritty problem solver and self-starter who can drive programs with the product and commercial teams within Roku and across external strategic partner organizations. The successful candidate will display a balance of hard and soft skills, including the ability to respond quickly to changing business needs. This is an excellent role for a senior professional who enjoys a high level of visibility, thrives on having a critical business impact, is able to make critical decisions, and is excited to work on a core content pipeline component that is crucial for many streaming components at Roku.

What you'll be doing: Design and implement highly scalable and reliable web-scale applications, tools, and automation frameworks that power the Roku Content Management System. Work closely with the product management team, content management services, and other internal product engineering teams to contribute to evolving the Roku Content Management Systems and Tools. Design and build data pipelines for batch, near-real-time, and real-time processing. Translate functional specifications into logical, component-based technical designs. Write and review code, and evaluate architectural tradeoffs for performance and security. Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects. Manage individual project priorities, deadlines, and deliverables with limited supervision.

We're excited if you have: Strong problem-solving and analytical abilities. 5+ years of professional experience as a Software Engineer. Proficiency in Java/Scala/Python. Strong technical competency and experience in building high-performance, cloud-based, scalable microservices. Experience with microservice and event-driven architectures. Experience with design and implementation of modern microservices architectures and API frameworks (REST/JSON). Experience with cloud platforms: AWS (preferred), GCP, etc. Experience with NoSQL data storage technologies such as Cassandra, DynamoDB, and Redis, as well as RDBMS like Oracle or MySQL. Ability to handle periodic on-call duty as well as out-of-band requests; strong written and verbal communication skills. Bachelor's degree in Computer Science plus 5 years of experience or equivalent; Master's degree preferred.

Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002.

To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Acceldata is reimagining the way companies observe their data! Acceldata is the pioneer and leader in data observability, revolutionizing how enterprises manage and observe data by offering comprehensive insights into key aspects of data, data pipelines, and data infrastructure across various environments. Our platform empowers data teams to manage products effectively by ensuring data quality, preventing failures, and controlling costs.

As a Software Engineer, you'll be responsible for building, scaling, and maintaining the core features and capabilities of our Data Observability suite. You'll work closely with the frontend and product teams to create a reliable and scalable platform.

A day in the life of a Software Engineer: Designing, building, and improving the capabilities of some of the key features of the product. Investigating software-related complaints and making necessary adjustments to ensure optimal software performance. Diving deep into open-source data engines and working on optimising their performance. Designing, building, and maintaining low-latency APIs. Developing services that will be consumed by the frontend and solution engineers. Regularly attending team meetings to discuss projects, brainstorm ideas, and put forward solutions to any issues.

You are a great fit for this role if you have: 2-4 years of experience with JVM languages (Java, Scala, Kotlin). Strong data structure, algorithm design, and problem-solving skills. Experience with database systems, RDBMS, MongoDB, and Elasticsearch. Experience in cloud technologies. Knowledge of distributed systems, and experience in enterprise software and SQL.

Bonus points for: Experience in Spark/Kafka. Experience in Big Data systems or Hadoop components.

We care for our team: Mentorship & Growth. ESOPs. Medical and Life Insurance. Paid Maternity & Parental Leave. Corporate Uber Program. Learning & Development Support.

Acceldata for All: We are a fast-growing company, solving complex data problems at scale. We are driven by strong work ethics, high standards of excellence, and a spirit of collaboration. We promote innovation, commitment, and accountability. Our goal is to cultivate a healthy work environment that fosters a sense of belonging, encourages teamwork, and brings out the best in every individual.

Why Acceldata? Acceldata is redefining data observability for enterprise data systems. Founded by experts who recognized the need for innovative monitoring and management solutions in a cloud-first, AI-driven environment, our platform empowers data teams to effectively manage data products. We address common challenges such as scaling and performance issues, cost overruns, and data quality problems by providing operational visibility, proactive alerts, and monitoring reliability across various environments. Delivered as a SaaS product, Acceldata's solutions have been embraced by global customers such as HPE, HSBC, Visa, Freddie Mac, Manulife, Workday, ZoomInfo, GSK, Oracle, PubMatic, PhonePe (Walmart), Hershey's, Dun & Bradstreet, and many more. Acceldata is a Series C-funded company; its investors include Insight Partners, March Capital, Lightspeed, Sorenson Ventures, Industry Ventures, and Emergent Ventures.

Posted 1 week ago

Apply

3.0 - 8.0 years

13 - 14 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Overview: As a Scala Developer in our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications that - among other use cases - connect IoT devices, pre-process, standardize, or enrich data, feed ML models, or generate alerts for shopfloor operators. The data sets produced by these applications - whether data streams or data at rest - need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications. We run these applications on a Kubernetes-based edge data platform in our plants. The platform is currently in its ramp-up phase, so apart from building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g., for requirements clarification and definition of data models.
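For candidates who want a concrete picture of the work described above, here is a minimal, illustrative Spark Structured Streaming sketch in Scala of the kind of IoT pre-processing and standardization the role mentions. The Kafka endpoint, topic, message schema, and output paths are invented for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Requires the spark-sql-kafka connector on the classpath.
object SensorStandardizer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("plant-sensor-standardizer").getOrCreate()
    import spark.implicits._

    // Hypothetical schema for raw shopfloor sensor readings
    val schema = new StructType()
      .add("deviceId", StringType)
      .add("plant", StringType)
      .add("tempCelsius", DoubleType)
      .add("ts", TimestampType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // assumed endpoint
      .option("subscribe", "shopfloor-telemetry")       // assumed topic
      .load()

    // Parse JSON payloads, drop implausible readings, standardize units
    val standardized = raw
      .select(from_json($"value".cast("string"), schema).as("r"))
      .select("r.*")
      .filter($"deviceId".isNotNull && $"tempCelsius".between(-80, 200))
      .withColumn("tempKelvin", $"tempCelsius" + 273.15)

    standardized.writeStream
      .format("parquet")
      .option("path", "/data/standardized/telemetry")             // assumed sink
      .option("checkpointLocation", "/data/checkpoints/telemetry")
      .start()
      .awaitTermination()
  }
}
```

In a real deployment of this kind, the quality rules and unit conversions would typically come from configuration rather than being hard-coded.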

Posted 1 week ago

Apply

2.0 - 11.0 years

16 - 18 Lacs

Pune

Work from Office

Source: Naukri

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will: Ensure data quality, data governance, and compliance with regulatory requirements. Monitor and optimize data pipeline performance. Troubleshoot and resolve data-related issues promptly. Implement monitoring and alerting systems for data processes. Troubleshoot and resolve technical issues, optimizing system performance and ensuring reliability. Create and maintain technical documentation for new and existing systems, ensuring that information is accessible to the team. Implement and monitor solutions that identify both system bottlenecks and production issues.

Requirements: To be successful in this role, you should meet the following requirements: Good communication skills, as the candidate needs to work with globally dispersed and diversified teams. Flexible attitude: open to learning new technologies based on project requirements. Proficiency in Python/Scala/Bash for data pipeline development and automation. Familiarity with CI/CD pipelines for deploying and managing data pipelines. Proven experience building and maintaining scalable data movement pipelines. Good understanding of Hadoop and GCP environments for data storage and processing. Familiarity with ETL tools and distributed data processing frameworks such as Spark. Good understanding of scheduling and orchestration tools such as Airflow or Control-M. Good understanding of data principles, data integrity, data best practices, etc.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Key Responsibilities:

Test Strategy & Planning: Develop and implement robust test strategies, detailed test plans, and comprehensive test cases for ETL processes, data migrations, data warehouse solutions, and data lake implementations.

Ab Initio ETL Testing: Execute functional, integration, regression, and performance tests for ETL jobs developed using the Ab Initio Graphical Development Environment (GDE) and Co>Operating System, and plans deployed via Control Center. Validate data transformations, aggregations, and data quality rules implemented within Ab Initio graphs.

Spark Data Pipeline Testing: Perform hands-on testing of data pipelines and transformations built using Apache Spark (PySpark/Scala Spark) for large-scale data processing in batch and potentially streaming modes. Verify data correctness, consistency, and performance of Spark jobs from source to target.

Advanced Data Validation & Reconciliation: Perform extensive data validation and reconciliation between source, staging, and target systems using complex SQL queries. Conduct row counts, sum checks, data type validations, primary key/foreign key integrity checks, and business rule validations.

Data Quality Assurance: Identify, analyze, document, and track data quality issues, anomalies, and discrepancies across the data landscape. Collaborate closely with ETL/Spark developers, data architects, and business analysts to understand data quality requirements, identify root causes, and ensure timely resolution of defects.

Documentation & Reporting: Create and maintain detailed test documentation, including test cases, test results, defect reports, and data quality metrics dashboards. Provide clear and concise communication on test progress, defect status, and overall data quality posture to stakeholders.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3+ years of dedicated experience in ETL/data warehouse testing. Strong hands-on experience testing ETL processes developed using Ab Initio (GDE, Co>Operating System). Hands-on experience testing data pipelines built with Apache Spark (PySpark or Scala Spark). Advanced SQL skills for data querying, validation, complex joins, and comparison across heterogeneous databases (e.g., Oracle, DB2, SQL Server, Hive). Solid understanding of ETL methodologies, data warehousing concepts (star schema, snowflake schema), and data modeling principles. Experience with test management and defect tracking tools (e.g., JIRA, Azure DevOps, HP ALM). Excellent analytical, problem-solving, and communication skills, with a keen eye for detail.
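As an illustration of the reconciliation checks listed above (row counts, sum checks, and full-row comparison), here is a minimal Spark/Scala sketch. The table names and the "amount" measure column are placeholders; the posting does not prescribe any particular tooling.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ReconChecks {
  // Source and target are assumed to share the same schema.
  def reconcile(spark: SparkSession, sourceTbl: String, targetTbl: String): Unit = {
    val src = spark.table(sourceTbl)
    val tgt = spark.table(targetTbl)

    // Row-count check
    val (srcCount, tgtCount) = (src.count(), tgt.count())
    require(srcCount == tgtCount, s"Row count mismatch: $srcCount vs $tgtCount")

    // Sum check on a numeric measure column ("amount" is a placeholder)
    val srcSum = src.agg(sum(col("amount").cast("decimal(38,4)"))).first().get(0)
    val tgtSum = tgt.agg(sum(col("amount").cast("decimal(38,4)"))).first().get(0)
    require(srcSum == tgtSum, s"Sum mismatch on amount: $srcSum vs $tgtSum")

    // Full-row reconciliation: rows present on one side but not the other
    val drift = src.exceptAll(tgt).union(tgt.exceptAll(src))
    require(drift.isEmpty, s"${drift.count()} rows differ between $sourceTbl and $targetTbl")
  }
}
```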

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The HiLabs Story: HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team: Multidisciplinary industry leaders. Healthcare domain experts. AI/ML and data science experts. Professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer I/II
Job Location: Pune, Maharashtra, India

Job Summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities: Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.

HiLabs Total Rewards: Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment. Smart mentorship and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice: https://www.hilabs.com/privacy

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Summary: We are seeking an experienced Data Engineer with a strong background in Scala development, advanced SQL, and big data technologies, particularly Apache Spark. The candidate will be responsible for designing, building, optimizing, and maintaining highly scalable and reliable data pipelines and data infrastructure.

Key Responsibilities: Data Pipeline Development: Design, develop, test, and deploy robust, high-performance, scalable ETL/ELT data pipelines using Scala and Apache Spark to ingest, process, and transform large volumes of structured and unstructured data from diverse sources. Big Data Expertise: Leverage expertise in the Hadoop ecosystem (HDFS, Hive, etc.) and distributed computing principles to build efficient and fault-tolerant data solutions. Advanced SQL: Write complex, optimized SQL queries and stored procedures. Performance Optimization: Continuously monitor, analyze, and optimize the performance of data pipelines and data stores. Troubleshoot complex data-related issues, identify bottlenecks, and implement solutions for improved efficiency and reliability. Data Quality & Governance: Implement data quality checks, validation rules, and reconciliation processes to ensure the accuracy, completeness, and consistency of data. Contribute to data governance and security best practices. Automation & CI/CD: Implement automation for data pipeline deployment, monitoring, and alerting using tools like Apache Airflow, Jenkins, or similar CI/CD platforms. Documentation: Create and maintain comprehensive technical documentation for data architectures, pipelines, and processes.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field. Minimum 5 years of professional experience in data engineering, with a strong focus on big data technologies. Proficiency in Scala for developing big data applications and transformations, especially with Apache Spark. Expert-level proficiency in SQL; ability to write complex queries, optimize performance, and understand database internals. Extensive hands-on experience with Apache Spark (Spark SQL, DataFrames, RDDs) for large-scale data processing and analytics. Solid understanding of distributed computing concepts and experience with the Hadoop ecosystem (HDFS, Hive). Experience with building and optimizing ETL/ELT processes and data warehousing concepts.
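To make the pipeline work described above concrete, here is a minimal batch ETL sketch in Scala and Spark. The paths, column names, and quality rules are illustrative assumptions only, not requirements from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw CSV landed by an upstream process (path is a placeholder)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/landing/orders/")

    // Transform: deduplicate, apply a basic quality rule, derive a partition column
    val cleaned = raw
      .dropDuplicates("order_id")
      .filter(col("order_total") >= 0)
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: columnar, partitioned output for downstream consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/warehouse/orders/")

    spark.stop()
  }
}
```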

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the client's needs.

Your Primary Responsibilities Include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education: Master's Degree.

Required Technical And Professional Expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for rule generation (like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred Technical And Professional Experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
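The DataFrame/Hive bullet above describes a common pattern: read a Hive table, apply business transformations with the DataFrame API, and write the result back to Hive. Here is a hedged Scala sketch of that pattern using the modern SparkSession with Hive support (the successor to the older HiveContext); the database, table, and column names are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveReadWrite {
  def main(args: Array[String]): Unit = {
    // SparkSession with Hive support replaces the older HiveContext API
    val spark = SparkSession.builder()
      .appName("hive-read-write")
      .enableHiveSupport()
      .getOrCreate()

    // Read a Hive table (database/table names are placeholders)
    val txns = spark.table("raw_db.transactions")

    // Apply a business transformation with the DataFrame API
    val summary = txns
      .filter(col("status") === "SETTLED")
      .groupBy(col("account_id"))
      .agg(sum(col("amount")).as("settled_total"))

    // Write the result back to Hive
    summary.write.mode("overwrite").saveAsTable("curated_db.account_settlements")
  }
}
```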

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

Role & responsibilities

Experience: 5-8 years
Employment Type: Full-Time

Job Summary: We are looking for a highly skilled Scala and Spark Developer to join our data engineering team. The ideal candidate will have strong experience in building scalable data processing solutions using Apache Spark and writing robust, high-performance applications in Scala. You will work closely with data scientists, data analysts, and product teams to design, develop, and optimize large-scale data pipelines and ETL workflows.

Key Responsibilities: Develop and maintain scalable data processing pipelines using Apache Spark and Scala. Work on batch and real-time data processing using Spark (RDD/DataFrame/Dataset). Write efficient and maintainable code following best practices and coding standards. Collaborate with cross-functional teams to understand data requirements and implement solutions. Optimize performance of Spark jobs and troubleshoot data-related issues. Integrate data from multiple sources and ensure data quality and consistency. Participate in design reviews and code reviews, and provide technical leadership when needed. Contribute to data modeling, schema design, and architecture discussions.

Required Skills: Strong programming skills in Scala. Expertise in Apache Spark (Core, SQL, Streaming). Hands-on experience with distributed computing and large-scale data processing. Experience with data formats like Parquet, Avro, ORC, and JSON. Good understanding of functional programming concepts. Familiarity with data ingestion tools (Kafka, Flume, Sqoop, etc.). Experience working with the Hadoop ecosystem (HDFS, Hive, YARN, etc.) is a plus. Strong SQL skills and experience working with relational and NoSQL databases. Experience with version control tools like Git.

Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with cloud platforms like AWS, Azure, or GCP (especially EMR, Databricks, etc.). Knowledge of containerization (Docker, Kubernetes) is a plus. Familiarity with CI/CD tools and DevOps practices.
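As a small illustration of the RDD/DataFrame/Dataset processing and file formats named above, here is a hedged Scala sketch that ingests JSON, filters through the typed Dataset API with a case class, and persists the result as Parquet; the field names and paths are assumptions made for the example.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

object TypedPipeline {
  // Case class gives a typed Dataset view over the raw JSON (fields assumed)
  case class Event(eventId: String, userId: String, amount: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("typed-pipeline").getOrCreate()
    import spark.implicits._

    // Ingest JSON, move through the typed Dataset API, persist as Parquet
    val events: Dataset[Event] = spark.read
      .json("/landing/events/") // placeholder path
      .as[Event]

    val highValue = events.filter(_.amount > 1000.0)

    highValue.write.mode("append").parquet("/warehouse/high_value_events/")
    spark.stop()
  }
}
```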

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Job Description: Data Engineer - Early Careers / Trainee
Location: India - Gurgaon (immediate joiners only)
Department: Public Cloud - Offerings and Delivery - Cloud Data Services / Hybrid

Job Summary: We are looking for a motivated fresher/trainee Data Engineer to join our Cloud Data Services team. As a trainee, you will learn and contribute to the design, development, and maintenance of data pipelines that enable analytics and business decision-making in cloud and hybrid environments. This role is ideal for recent graduates or entry-level candidates passionate about data and cloud technologies.

Key Responsibilities: Assist in developing and maintaining scalable and efficient data pipelines under the guidance of senior engineers. Support data extraction, transformation, and loading (ETL/ELT) processes. Learn and apply data quality, governance, and validation practices. Participate in developing data models for structured and semi-structured data. Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders. Follow best practices in version control (Git) and data pipeline/DevOps/MLOps principles. Document data workflows, pipelines, and learnings for future reference. Stay updated with new data engineering tools and technologies.

Education & Qualifications: Bachelor's degree (or final year) in Computer Science, Information Technology, Data Engineering, or related fields. Coursework or academic projects in databases, data warehousing, data structures, Python/Java/Scala, and SQL. Familiarity with cloud platforms (AWS, Azure, or Google Cloud) is a plus. Knowledge of ETL processes, data modeling concepts, or Big Data technologies (Hadoop, Spark) is desirable.

Technical Skills (Good to Have / Will Learn On the Job): Basic knowledge of Python or SQL programming. Exposure to data integration tools or scripting. Understanding of relational and NoSQL databases. Familiarity with data visualization tools like Power BI or Tableau (optional). Interest in cloud technologies (AWS S3, Azure Data Lake, GCP BigQuery).

Soft Skills: Strong analytical and problem-solving mindset. Eagerness to learn new technologies and take on challenges. Good written and verbal communication. Ability to work both independently and within a team environment. Attention to detail and time management.

What You Will Gain: Hands-on experience with cloud-based data platforms. Exposure to real-world data engineering projects. Training in ETL pipelines, data modeling, and cloud data services. Opportunity to transition to a full-time Data Engineer role based on performance.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Source: Naukri

Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager)
Management Level: G150
Years of Experience: 8 to 13 years
Job Location: Bangalore (Hybrid)
Must-Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem.
Educational Qualification: BE/BTech/MTech/MCA, or a Bachelor's or Master's degree in Computer Science.

Job Overview: Overall experience of 8+ years in IT, software engineering, or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs.

Job Description: We look for developers with good Scala programming skills and knowledge of SQL.

Technical Skills:
Scala, Python -> Scala is often used for Hadoop-based projects, while Python and Scala are common choices for Apache Spark-based projects.
SQL -> Knowledge of SQL (Structured Query Language) is important for querying and manipulating data.
Shell script -> Shell scripts are used for batch processing of data, can be used for scheduling jobs, and are often used for deploying applications.
Spark Scala -> Spark Scala allows you to write Spark applications using the Spark API in Scala.
Spark SQL -> Allows you to work with structured data using SQL-like queries and the DataFrame API. You can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis (see the sketch below).

The typical tasks and responsibilities of a Big Data Developer include:
1. Data Ingestion: Collecting and importing data from various sources, such as databases, logs, and APIs, into the Big Data infrastructure.
2. Data Processing: Designing data pipelines to clean, transform, and prepare raw data for analysis. This often involves using technologies like Apache Hadoop and Apache Spark.
3. Data Storage: Selecting appropriate data storage technologies like the Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based storage solutions (Snowflake, Databricks).
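To make the Spark SQL point above concrete, here is a minimal Scala sketch that registers a DataFrame as a temporary view and runs a SQL query against it; the input path and column names are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-sql-demo").getOrCreate()

    val sales = spark.read.parquet("/data/sales/") // placeholder path

    // Register the DataFrame as a temp view so it can be queried with SQL
    sales.createOrReplaceTempView("sales")

    val byRegion = spark.sql(
      """SELECT region, SUM(revenue) AS total_revenue
        |FROM sales
        |GROUP BY region
        |ORDER BY total_revenue DESC""".stripMargin)

    byRegion.show()
    spark.stop()
  }
}
```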

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Detailed Job Description for Solution Architect (PAN India)

Architectural Assessment & Roadmapping: Conduct a comprehensive assessment of the current R&D Data Lake architecture. Propose and design the architecture for the next-generation self-service R&D Data Lake based on defined product specifications. Contribute to defining a detailed architectural roadmap that incorporates the latest enterprise patterns and strategic recommendations for the engineering team.

Data Ingestion & Processing Enhancements: Design and prototype updated data ingestion mechanisms that meet GxP validation requirements and improve data flow efficiency. Architect advanced data and metadata processing techniques to enhance data quality and accessibility.

Storage Patterns Optimization: Evaluate optimized storage patterns to ensure scalability, performance, and cost-effectiveness. Design updated storage solutions aligned with technical roadmap objectives and compliance standards.

Data Handling & Governance: Define and document standardized data handling procedures that adhere to GxP and data governance policies. Collaborate with governance teams to ensure procedures align with regulatory standards and best practices. Assess current security measures and implement robust access controls to protect sensitive R&D data. Ensure that all security enhancements adhere to enterprise security frameworks and regulatory requirements. Design and implement comprehensive data cataloguing procedures to improve data discoverability and usability. Integrate cataloguing processes with existing data governance frameworks to maintain continuity and compliance. Recommend and oversee the implementation of new tools and technologies related to ingestion, storage, processing, handling, security, and cataloguing. Design and plan to ensure seamless integration and minimal disruption during technology updates. Collaborate on the ongoing maintenance of, and provide technical support for, legacy data ingestion pipelines throughout the uplift project. Ensure legacy systems remain stable, reliable, and efficient during the transition period. Work closely with the R&D IT team, data governance groups, and other stakeholders for coordinated and effective implementation of architectural updates. Collaborate in knowledge transfer sessions to equip internal teams to manage and maintain the new architecture post-project.

Required Skills: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent hands-on experience. Minimum 10 years of experience in solution architecture, with a strong background in data architecture and enterprise data management. Strong understanding of cloud-native platforms, with a preference for AWS. Knowledgeable in distributed data architectures, including services like S3, Glue, and Lake Formation. Proven experience in programming languages and tools relevant to data engineering (e.g., Python, Scala). Experienced with Big Data technologies like Hadoop, Cassandra, Spark, Hive, and Kafka. Skilled in using querying tools such as Redshift, Spark SQL, Hive, and Presto. Demonstrated experience in data modeling, data pipeline development, and data warehousing.

Infrastructure and Deployment: Familiar with Infrastructure-as-Code tools, including Terraform and CloudFormation. Experienced in building systems around the CI/CD concept. Hands-on experience with AWS services and other cloud platforms.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Himachal Pradesh

On-site

Source: Glassdoor

Kullu, Himāchal Pradesh, India | Full-time | Permanent
Blue Dart Express Limited | eCommerce Solutions

Territory Service Representative - Manali
Qualification: Minimum 12th pass, preferably Graduate.
Experience: 3-4 years of experience from a Service Centre background.
Age: 25 to 28 years.
Knowledge: Should know the local geography.
Skills: Should possess a two-wheeler. Should possess a valid DL (Driving License). High organizational commitment. Good team worker. Preferably knows basic computers. Good communication in local/Hindi/English languages.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Source: Glassdoor

Req ID: 327890

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python Developer - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Python Data Engineer: Exposure to retrieval-augmented generation (RAG) systems and vector databases. Strong programming skills in Python (and optionally Scala or Java). Hands-on experience with data storage solutions (e.g., Delta Lake, Parquet, S3, BigQuery). Experience with data preparation for transformer-based models or LLMs. Expertise in working with large-scale data frameworks (e.g., Spark, Kafka, Dask). Familiarity with MLOps tools (e.g., MLflow, Weights & Biases, SageMaker Pipelines).

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems, the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You'll Do: Collaborate with client-facing teams to understand the solution context and contribute to technical requirement gathering and analysis. Design and implement technical features leveraging best practices for the technology stack being used. Work with technical architects on the team to validate the design and implementation approach. Write production-ready code that is easily testable, understood by other developers, and accounts for edge cases and errors. Ensure the highest quality of deliverables by following architecture/design guidelines, coding best practices, and periodic design/code reviews. Write unit tests as well as higher-level tests to handle expected edge cases and errors gracefully, as well as happy paths. Use bug tracking, code review, version control and other tools to organize and deliver work. Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues and dependencies. Consistently contribute to researching and evaluating the latest technologies through rapid learning, conducting proof-of-concepts and creating prototype solutions.

What You'll Bring: 2+ years of relevant hands-on experience. A CS foundation is a must. Strong command over a distributed computing framework like Spark (preferred) or others. Strong analytical/problem-solving skills. Ability to quickly learn and become hands-on with new technology and be innovative in creating solutions. Strong in at least one programming language (Python, Java, Scala, etc.) and programming basics (data structures). Hands-on experience in building modules for data management solutions such as data pipelines, orchestration, and ingestion patterns (batch, real-time). Experience in designing and implementing solutions on distributed computing and cloud services platforms, including (but not limited to) AWS, Azure, and GCP. Good understanding of RDBMS, with some experience with ETL preferred.

Additional Skills: Understanding of DevOps, CI/CD, and data security; experience in designing on the AWS cloud platform. AWS Solutions Architect certification with understanding of the broader AWS stack. Knowledge of data modeling and data warehouse concepts. Willingness to travel to other global offices as needed to work with clients or other internal project teams.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com

Posted 1 week ago

Apply

10.0 years

4 - 7 Lacs

Hyderābād

On-site


Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Lead Consultant Specialist.

In this role, you will:
  • Oversee the solution design and implementation of change while ensuring production is resilient and performant; this includes interaction with the bank's architects, other systems and technical teams, end users and stakeholders
  • Oversee and guide the day-to-day activities of the technical team, with the help of more experienced colleagues, and ensure the team follows good practice
  • Suggest and plan the best technical solutions and undertake problem solving, balancing pragmatism against long-term best practice; the role includes opportunities for hands-on development and analysis, not just team management
  • Cover a mix of change and run responsibilities; the current team size is around 45 people located in the UK, India, China, Poland and Mexico, but mostly India
  • Be flexible in working hours, ready to work in shifts and on call

Requirements
To be successful in this role, you should meet the following requirements:
  • Background in hands-on technical development, with at least 10 years of industry experience in data engineering or an engineering equivalent, and experience managing a team of developers
  • Strong emotional intelligence; able to work professionally under pressure, with the gravitas to represent the platform in senior meetings
  • Strong communication skills, with the ability to convey technical detail in non-technical language
  • A practitioner and proponent of Agile and DevOps
  • Proficiency in Hadoop, Spark, Scala, Python, or a programming language associated with data engineering
  • Expertise building and deploying production-level data processing batch systems maintained by application support teams
  • Experience with a variety of modern development tooling (e.g. Git, Gradle, Nexus) and technologies supporting automation and DevOps (e.g. Jenkins, Docker)
  • Experience working in an Agile environment
  • Strong technical communication ability, with demonstrable experience of working in rapidly changing client environments
  • Knowledge of testing libraries for common programming languages (such as ScalaTest or equivalent); knows the difference between test types (unit test, integration test) and can cite specific examples of tests they have written themselves
  • Good understanding of CDP 7.1, HDFS filesystems, Unix, Unix shell scripting, Elasticsearch
  • Experience understanding and analyzing complex business requirements and carrying out system design accordingly
  • Quantexa Data Engineering Certification (preferred)
  • Experience with managed Kubernetes services such as AWS EKS or Azure AKS (preferred)
  • Experience in Angular (preferred)
  • Microservices (OCP, Kubernetes) (preferred)

You'll achieve more when you join HSBC.
www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
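
Since the posting names ScalaTest and expects candidates to know the unit/integration distinction, a minimal unit-test sketch may be useful context. It assumes ScalaTest 3.x's AnyFunSuite style; BatchStats and its average function are invented for the example, not code from the posting.

    import org.scalatest.funsuite.AnyFunSuite

    // Hypothetical code under test, invented for this sketch.
    object BatchStats {
      def average(xs: Seq[Double]): Option[Double] =
        if (xs.isEmpty) None else Some(xs.sum / xs.size)
    }

    // A unit test exercises one small piece of logic in isolation;
    // an integration test would instead wire up real dependencies.
    class BatchStatsSpec extends AnyFunSuite {
      test("average of an empty batch is None") {
        assert(BatchStats.average(Nil).isEmpty)
      }

      test("average of a non-empty batch") {
        assert(BatchStats.average(Seq(1.0, 2.0, 3.0)).contains(2.0))
      }
    }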

Posted 1 week ago

Apply

1.0 - 3.0 years

2 - 5 Lacs

Hyderabad

Work from Office


What you will do
In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. The role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.
  • Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
  • Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
  • Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
  • Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
  • Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
  • Apply expertise in data quality, data validation and verification frameworks
  • Innovate, explore and implement new tools and technologies to enhance efficient data processing
  • Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
  • Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
  • Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
  • Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
  • Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. We are looking for a highly motivated expert Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks.

Basic Qualifications:
  • Master's degree and 1 to 3 years of Computer Science, IT or related field experience, OR Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience, OR Diploma and 7 to 9 years of Computer Science, IT or related field experience
  • Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
  • Proficiency in workflow orchestration and performance tuning of big data processing
  • Strong understanding of AWS services
  • Ability to quickly learn, adapt and apply new technologies
  • Strong problem-solving and analytical skills
  • Excellent communication and teamwork skills
  • Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Preferred Qualifications:
  • AWS Certified Data Engineer preferred
  • Databricks certification preferred
  • Scaled Agile SAFe certification preferred
  • Data engineering experience in the biotechnology or pharma industry
  • Experience writing APIs to make data available to consumers
  • Experience with SQL/NoSQL databases and vector databases for large language models
  • Experience with data modeling and performance tuning for both OLAP and OLTP databases
  • Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Soft Skills:
  • Excellent analytical and troubleshooting skills
  • Strong verbal and written communication skills
  • Ability to work effectively with global, virtual teams
  • High degree of initiative and self-motivation
  • Ability to manage multiple priorities successfully
  • Team-oriented, with a focus on achieving team goals
  • Ability to learn quickly; organized and detail-oriented
  • Strong presentation and public speaking skills
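
The pipeline work described above centers on Spark-based ETL/ELT. As a rough sketch of that shape of job (not code from the posting), here is a minimal batch extract-transform-load in Scala; the bucket paths and the claim_id column are assumptions made for the example.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object ClaimsEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("claims-etl") // hypothetical job name
          .getOrCreate()

        // Extract: read raw CSV (path and header-based schema are assumptions)
        val raw = spark.read
          .option("header", "true")
          .csv("s3a://example-bucket/raw/claims/")

        // Transform: drop records missing a key and stamp the load date
        val cleaned = raw
          .filter(col("claim_id").isNotNull) // "claim_id" is an invented column
          .withColumn("load_date", current_date())

        // Load: write Parquet partitioned by load date
        cleaned.write
          .mode("overwrite")
          .partitionBy("load_date")
          .parquet("s3a://example-bucket/curated/claims/")

        spark.stop()
      }
    }

Partitioning by a low-cardinality column such as the load date keeps file counts manageable; partitioning by a raw timestamp would create one partition per row.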

Posted 1 week ago

Apply

2.0 years

7 - 10 Lacs

Hyderābād

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Join the EDG team as a Full Stack Software Engineer. The EDG team is responsible for improving the consumer experience by implementing an enterprise device gateway to manage device health signal acquisition, centralize consumer consent, facilitate efficient health signal distribution, and empower UHC with connected insights across the health and wellness ecosystem. The team has a strong and integrated relationship with the product team, based on collaboration, trust, and partnership. Goals for the team are focused on creating meaningful positive impact for our customers through clear and measurable metrics analysis.

Primary Responsibilities:
  • Write high-quality, fault-tolerant code; normally 70% back-end and 30% front-end (though the exact ratio will depend on your interest)
  • Build high-scale systems, libraries and frameworks, and create test plans
  • Monitor production systems and provide on-call support
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
  • BS in Computer Science, Engineering or a related technical field, or equivalent experience
  • 2+ years of experience with JS libraries and frameworks, such as Angular, React or others
  • 2+ years of experience in Scala, Java, or another compiled language

Preferred Qualifications:
  • Experience with web design
  • Experience using RESTful APIs and asynchronous JS
  • Experience in design and development
  • Testing experience with Scala or Java
  • Database and caching experience, SQL and NoSQL (Postgres, Elasticsearch, or MongoDB)
  • Proven interest in learning Scala

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site


Company Profile
Oceaneering is a global provider of engineered services and products, primarily to the offshore energy industry. We develop products and services for use throughout the lifecycle of an offshore oilfield, from drilling to decommissioning. We operate the world's premier fleet of work class ROVs. Additionally, we are a leader in offshore oilfield maintenance services, umbilicals, subsea hardware, and tooling. We also use applied technology expertise to serve the defense, entertainment, material handling, aerospace, science, and renewable energy industries.

Since 2003, Oceaneering's India Center has been an integral part of operations for Oceaneering's robust product and service offerings across the globe. This center caters to diverse business needs, from oil and gas field infrastructure and subsea robotics to automated material handling and logistics. Our multidisciplinary team offers a wide spectrum of solutions, encompassing Subsea Engineering, Robotics, Automation, Control Systems, Software Development, Asset Integrity Management, Inspection, ROV operations, Field Network Management, Graphics Design & Animation, and more. In addition to these technical functions, Oceaneering India Center plays host to several crucial business functions, including Finance, Supply Chain Management (SCM), Information Technology (IT), Human Resources (HR), and Health, Safety & Environment (HSE). Our world-class infrastructure in India includes modern offices, industry-leading tools and software, equipped labs, and beautiful campuses aligned with the future way of work. Oceaneering in India, as well as globally, has a great work culture that is flexible, transparent, and collaborative with great team synergy. At Oceaneering India Center, we take pride in "Solving the Unsolvable" by leveraging the diverse expertise within our team. Join us in shaping the future of technology and engineering solutions on a global scale.

Position Summary
The Principal Data Scientist will develop Machine Learning and/or Deep Learning based integrated solutions that address customer needs such as topside and subsea inspection. They will also be responsible for developing machine learning algorithms for automation and data analytics programs for Oceaneering's next-generation systems. The position requires the Principal Data Scientist to work with various Oceaneering business units across global time zones, but also offers the flexibility of a hybrid work-office environment.

Essential Duties And Responsibilities
  • Lead and supervise a team of moderately experienced engineers on product/prototype design and development assignments or applications
  • Work both independently and collaboratively to develop custom data models and algorithms to apply to data sets that will be deployed in existing and new products
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of new data sources and data-gathering techniques
  • Build data models and organize structured and unstructured data to interpret solutions; prepare data for predictive and prescriptive modeling
  • Architect solutions through selection of appropriate technology and components
  • Determine the technical direction and strategy for solving complex, significant, or major issues
  • Plan and evaluate architectural design, and identify technical risks and associated ways to mitigate them
  • Prepare design proposals to reflect cost, schedule, and technical approaches
  • Recommend test controls, strategies, apparatus, and equipment
  • Develop, construct, test, and maintain architectures
  • Lead research activities for ongoing government and commercial projects and products
  • Collaborate on proposals, grants, and publications in algorithm development
  • Collect data as warranted to support algorithm development efforts
  • Work directly with software engineers to implement algorithms in commercial software products
  • Work with third parties to utilize off-the-shelf industrial solutions
  • Develop algorithms in key research areas based on the client's technical problem; this requires constant paper reading and staying ahead by knowing what is, and will be, state of the art in this field
  • Work hands-on in cross-functional teams with a strong sense of self-direction

Non-essential
  • Develop an awareness of programming and design alternatives
  • Cultivate and disseminate knowledge of application development best practices
  • Gather statistics and prepare and write reports on the status of the programming process for discussion with management and/or team members
  • Direct research on emerging application development software products, languages, and standards in support of procurement and development efforts
  • Train, manage and provide guidance to junior staff
  • Perform all other duties as requested, directed or assigned

Supervisory Responsibilities
This position does not have direct supervisory responsibilities.

Reporting Relationship
Engagement Head

Qualifications
REQUIRED
  • Bachelor's degree in Electronics and Electrical Engineering (or a related field) with eight or more years of experience working on Machine Learning and Deep Learning based projects, OR
  • Master's degree in Data Science (or a related field) with six or more years of experience working on Machine Learning and Deep Learning based projects

DESIRED
  • Strong knowledge of advanced statistical functions: histograms and distributions, regression studies, scenario analysis, etc.
  • Proficiency in Object-Oriented Analysis, Design and Programming
  • Strong background in data engineering tools like Python/C#, R, Apache Spark, Scala, etc.
  • Prior experience handling large amounts of data, including text, shapes, sounds, images and/or videos
  • Knowledge of SaaS platforms like Microsoft Fabric, Databricks, Snowflake, h2o, etc.
  • Background working on cloud platforms like Azure (ML), AWS (SageMaker), or GCP (Vertex)
  • Proficiency in querying SQL and NoSQL databases
  • Hands-on experience with various databases like MySQL/PostgreSQL/Oracle, MongoDB, InfluxDB, TimescaleDB, neo4j, Arango, Redis, Cassandra, etc.
  • Prior experience with at least one probabilistic/statistical ambiguity resolution algorithm
  • Proficiency in Windows and Linux operating systems
  • Basic understanding of ML frameworks like PyTorch and TensorFlow
  • Basic understanding of messaging and IoT protocols like Kafka, MQTT or RabbitMQ
  • Prior experience with big data platforms like Hadoop, Apache Spark, or Hive is a plus

Knowledge, Skills, Abilities, And Other Characteristics
  • Ability to analyze situations accurately, utilizing a variety of analytical techniques in order to make well-informed decisions
  • Ability to effectively prioritize and execute tasks in a high-pressure environment
  • Skill in gathering, analyzing and interpreting data
  • Ability to determine and meet customer needs
  • Ensures that others involved in a project or effort are kept informed about developments and plans
  • Knowledge of communication styles and techniques
  • Ability to establish and maintain cooperative working relationships
  • Skill in prioritizing workflow in a changing work environment
  • Knowledge of applicable data privacy practices and laws
  • Strong analytical and problem-solving skills

Additional Information
This position is considered OFFICE WORK, which is characterized as follows:
  • Almost exclusively indoors during the day and occasionally at night
  • Occasional exposure to airborne dust in the workplace
  • Work surface is stable (flat)

The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. This position is considered LIGHT work:
  • Occasional: lift up to 20 pounds; climbing, stooping, kneeling, squatting, and reaching
  • Frequent: lift up to 10 pounds; standing
  • Constant: repetitive movements of arms and hands; sitting with back supported

Closing Statement
In addition, we make a priority of providing learning and development opportunities to enable employees to achieve their potential and take charge of their future. As well as developing employees in a specific role, we are committed to lifelong learning and ongoing education, including developing people skills and identifying future supervisors and managers. Every month, hundreds of employees are provided training, including HSE awareness, apprenticeships, entry and advanced level technical courses, management development seminars, and leadership and supervisory training. We have a strong ethos of internal promotion. We can offer long-term employment and career advancement across countries and continents. Working at Oceaneering means that if you have the ability, drive, and ambition to take charge of your future, you will be supported to do so and the possibilities are endless.

Equal Opportunity/Inclusion
Oceaneering's policy is to provide equal employment opportunity to all applicants.

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote


Description
Data Engineer

Responsibilities:
  • Deliver end-to-end data and analytics capabilities, including data ingest, data transformation, data science, and data visualization, in collaboration with Data and Analytics stakeholder groups
  • Design and deploy databases and data pipelines to support analytics projects
  • Develop scalable and fault-tolerant workflows
  • Clearly document issues, solutions, findings and recommendations to be shared internally and externally
  • Learn and apply tools and technologies proficiently, including:
      - Languages: Python, PySpark, ANSI SQL, Python ML libraries
      - Frameworks/Platforms: Spark, Snowflake, Airflow, Hadoop, Kafka
      - Cloud Computing: AWS
      - Tools/Products: PyCharm, Jupyter, Tableau, PowerBI
  • Optimize performance of queries and dashboards
  • Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions
  • Analyze client data and systems to determine whether requirements can be met
  • Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team
  • Develop and communicate solution architectures and present solutions to both business and technical stakeholders
  • Provide end-user support to other data engineers and analysts

Candidate Requirements
Expert experience in the following (should have / good to have):
  • SQL, Python, PySpark, Python ML libraries; other programming languages (R, Scala, SAS, Java, etc.) are a plus
  • Data and analytics technologies, including SQL/NoSQL/graph databases, ETL, and BI
  • Knowledge of CI/CD and related tools such as GitLab, AWS CodeCommit, etc.
  • AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM and EDS
  • Exposure to Snowflake and Airflow
  • Solid scripting skills (e.g., bash/shell scripts, Python)

Proven work experience in the following:
  • Data streaming technologies
  • Big data technologies, including Hadoop, Spark, Hive, Teradata, etc.
  • Linux command-line operations
  • Networking knowledge (OSI network layers, TCP/IP, virtualization)

Additionally:
  • The candidate should be able to lead the team, communicate with the business, and gather and interpret business requirements
  • Experience with agile delivery methodologies using Jira or similar tools
  • Experience working with remote teams
  • AWS Solutions Architect / Developer / Data Analytics Specialty certifications; Professional-level certification is a plus
  • Bachelor's degree in Computer Science or a relevant field; Master's degree is a plus

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site


Delhi / Bangalore | Engineering / Full Time / Hybrid

What is Findem: Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time, making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi, India (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying and managing various data pipelines, data lakes and big data processing solutions using big data and ETL technologies.

Responsibilities
  • Build data pipelines, big data processing solutions and data lake infrastructure using various big data and ETL technologies
  • Assemble and process large, complex data sets that meet functional and non-functional business requirements
  • ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka, etc., with processing using SQL and big data technologies
  • Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
  • Build interactive and ad-hoc query self-serve tools for analytics use cases
  • Build data models and data schemas from a performance, scalability and functional requirements perspective
  • Build processes supporting data transformation, metadata, dependency and workflow management
  • Research, experiment with and prototype new tools/technologies and make them successful

Skill Requirements
  • Must have: strength in Python/Scala
  • Must have: experience with big data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
  • Experience with various file formats like Parquet, JSON, Avro, ORC, etc.
  • Experience with workflow management tools like Airflow
  • Experience with batch processing, streaming and message queues
  • Experience with any visualization tools like Redash, Tableau, Kibana, etc.
  • Experience working with structured and unstructured data sets
  • Strong problem-solving skills

Good to have
  • Exposure to NoSQL stores like MongoDB
  • Exposure to cloud platforms like AWS, GCP, etc.
  • Exposure to microservices architecture
  • Exposure to machine learning techniques

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area, with our India headquarters in Bengaluru.

Equal Opportunity
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
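
The skill list above pairs Kafka ingestion with columnar formats such as Parquet. The sketch below shows that pattern with Spark Structured Streaming in Scala; the broker address, topic, and paths are invented for the example, and it assumes the spark-sql-kafka connector is on the classpath.

    import org.apache.spark.sql.SparkSession

    object EventsToParquet {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("events-to-parquet") // hypothetical job name
          .getOrCreate()

        // Read a stream from Kafka (broker and topic are assumptions)
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

        // Continuously land the stream as Parquet files
        val query = events.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/lake/events/")              // illustrative path
          .option("checkpointLocation", "s3a://example-bucket/chk/events/") // required for streaming
          .start()

        query.awaitTermination()
      }
    }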

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office


Job Summary
A person at this position has gained significant work experience and is able to apply their knowledge effectively and deliver results. They are able to analyse and interpret complex problems and improve, change or adapt existing methods to solve them. They regularly interact with interfacing groups / customers to clarify technical issues and resolve them, and participate actively in important project/work-related activities, contributing towards identifying important issues and risks. They reach out for guidance and advice to ensure high quality of deliverables, and consistently seek opportunities to enhance their existing skills, acquire more complex skills and work towards enhancing their proficiency in their field of specialisation. Works under limited supervision of the Team Lead / Project Manager.

Roles & Responsibilities
  • Responsible for design, coding, testing, bug fixing, documentation and technical support in the assigned area
  • Responsible for on-time delivery while adhering to quality and productivity goals
  • Responsible for adhering to guidelines and checklists for all deliverable reviews, sending status reports to the team lead and following relevant organizational processes
  • Responsible for customer collaboration and interactions, and support for customer queries
  • Expected to enhance technical capabilities by attending trainings, self-study and periodic technical assessments
  • Expected to participate in technical initiatives related to the project and organization, and deliver training as per plan and quality

Education and Experience Required
Engineering graduate, MCA, etc.
Experience: 2-5 years

Competencies Description
The Data Science TCB is applicable to one who:
  1. Analyses data to arrive at patterns/insights/models
  2. Comes up with models based on the data to provide recommendations, predictive analytics, etc.
  3. Provides implementations of the models in R, Matlab, etc.
  4. Can understand and apply machine learning/AI techniques

Platforms: Unix
Technology Standard: NA
Tools: R, Matlab, Spark Machine Learning, Python-ML, SPSS, SAS
Languages: R, Perl, Python, Scala
Specialization: Cognitive analytics, including computer vision, AI and ML, and statistics

Posted 1 week ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
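
Of these, Akka is the one most often probed with a hands-on question. Below is a minimal sketch of a greeter actor, assuming the Akka Typed API (Akka 2.6+); the actor, message, and system names are invented for illustration.

    import akka.actor.typed.{ActorSystem, Behavior}
    import akka.actor.typed.scaladsl.Behaviors

    object Greeter {
      // The message protocol this actor accepts
      final case class Greet(name: String)

      def apply(): Behavior[Greet] =
        Behaviors.receiveMessage { msg =>
          println(s"Hello, ${msg.name}!")
          Behaviors.same // keep the same behavior for the next message
        }
    }

    object GreeterApp extends App {
      val system: ActorSystem[Greeter.Greet] = ActorSystem(Greeter(), "greeter")
      system ! Greeter.Greet("Scala")
      system.terminate()
    }

Note that in the typed API the set of messages an actor accepts is part of its type (Behavior[Greet]), which is the main difference interviewers tend to ask about versus classic untyped actors.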

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles; a short warm-up code snippet illustrating several of these concepts follows the list:

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
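
To make a few of these concepts concrete, here is a small, self-contained warm-up snippet (plain Scala 2, no external libraries) touching val vs var, case classes with pattern matching, map vs flatMap, lazy evaluation, and Futures. It is a study aid, not tied to any particular posting above.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object InterviewWarmup extends App {
      // val is immutable; var is reassignable
      val fixed = 42
      var counter = 0
      counter += 1

      // Case classes + pattern matching
      sealed trait Shape
      case class Circle(r: Double) extends Shape
      case class Rect(w: Double, h: Double) extends Shape

      def area(s: Shape): Double = s match {
        case Circle(r)  => math.Pi * r * r
        case Rect(w, h) => w * h
      }
      println(area(Rect(3, 4))) // 12.0

      // map keeps nesting; flatMap flattens one level
      println(List(1, 2).map(n => List(n, n * 10)))     // List(List(1, 10), List(2, 20))
      println(List(1, 2).flatMap(n => List(n, n * 10))) // List(1, 10, 2, 20)

      // Lazy evaluation: the body runs only on first access
      lazy val expensive: Int = { println("computed once"); 7 }
      println(expensive + expensive) // prints "computed once" a single time

      // Futures for asynchronous programming
      val sum: Future[Int] = Future(1 + 2)
      println(Await.result(sum, 1.second)) // 3
    }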

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies