0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide. It specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.
Job Title: Performance Tester
Key Skills: AWS, JMeter, AppDynamics, New Relic, Splunk, DataDog
Job Locations: Chennai, Pune
Experience: 6.5-7 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate
Job Description: Experience, Skills and Qualifications:
• Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g. AWS)
• Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS and key AWS services, and DevOps tools like CloudFormation and Terraform
• Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
• Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus and Lighthouse
• Monitoring/logging tools (such as AppDynamics, New Relic, Splunk, DataDog)
• Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
• Docker, Kubernetes and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods and nodes; vertical/horizontal pod autoscaling concepts; high availability
• Performance testing and engineering activity planning, estimating, designing, executing and analysing output from performance tests
• Working in an agile environment, "DevOps" team or a similar multi-skilled team in a technically demanding function
• Jenkins and CI/CD pipelines, including pipeline scripting
• Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin etc.
• Programming and scripting language skills in Java, Shell, Scala, Groovy, Python and knowledge of security mechanisms such as OAuth etc.
• Tools like GitHub, Jira and Confluence
• Assisting resiliency production support teams and performance incident root cause analysis
• Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
• High levels of judgment and decision making, being able to rationalize and present the background and reasoning for direction taken
• Strong stakeholder management and excellent communication skills
• Extensive knowledge of risk management and mitigation
• Strong analytical and problem-solving skills
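The listing centers on designing and analysing load tests with tools like JMeter and Taurus. As a compact illustration of the same ideas in code (the examples on this page use Scala), here is a minimal sketch using Gatling, a Scala-based load-testing tool not named in the listing; the base URL, endpoints and SLA thresholds are illustrative assumptions.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Hypothetical system under test; swap in the real base URL and endpoints.
class CheckoutLoadTest extends Simulation {

  val httpProtocol = http
    .baseUrl("https://example-app.internal") // assumed endpoint
    .acceptHeader("application/json")

  // A single user journey: browse the catalogue, then check out.
  val scn = scenario("Checkout journey")
    .exec(http("list products").get("/api/products").check(status.is(200)))
    .pause(1.second)
    .exec(http("checkout").post("/api/checkout")
      .body(StringBody("""{"cart":"demo"}""")).asJson
      .check(status.is(200)))

  // Ramp to 50 concurrent users over 30 seconds and assert SLAs.
  setUp(scn.inject(rampUsers(50).during(30.seconds)))
    .protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(500), // 95th percentile under 500 ms
      global.successfulRequests.percent.gt(99)
    )
}
```

The assertions at the bottom are what make such a script useful in a CI/CD pipeline: the build fails automatically when response times or error rates drift past the agreed thresholds.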
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You will do
● Create beautiful software experiences for our clients using design thinking, lean and agile methodology.
● Work on software products designed from scratch using the latest cutting-edge technologies, platforms and languages such as Java, Python, JavaScript, Go and Scala.
● Work in a dynamic, collaborative, transparent, non-hierarchical culture.
● Work in collaborative, fast-paced and value-driven teams to build innovative customer experiences for our clients.
● Help to grow the next generation of developers and have a positive impact on the industry.
Basic Qualifications
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript etc.
● Server-side development experience, mainly in Java (Python and Node.js can be considered).
● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS or jQuery etc. is good to have.
● Passion for software engineering and adherence to the best coding practices.
● Good to great problem-solving and communication skills.
Nice to have Qualifications
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with DevOps, continuous integration and deployment.
● Exposure to big data technologies, machine learning and NLP will be a plus.
Benefits
● Competitive salary.
● Work from anywhere.
● Learning and gaining experience rapidly.
● Reimbursement for basic working set-up at home.
● Insurance (including top-up insurance for COVID).
Location
Hybrid: Mumbai / Pune / Bangalore / Hyderabad
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad
Contract Duration: 6 Months
Experience Required: 8+ years (overall), 5+ years (relevant)
Primary Skills
- Python
- Spark (PySpark)
- SQL
- Delta Lake
Key Responsibilities & Skills
- Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming
- Proficient in Delta Lake features: time travel, schema evolution, data partitioning
- Experience designing and building data pipelines using Spark and Delta Lake
- Solid experience in Python/Scala/Java for Spark development
- Knowledge of data ingestion from files, APIs, and databases
- Familiarity with data validation and quality best practices
- Working knowledge of data warehouse concepts and data modeling
- Hands-on with Git for code versioning
- Exposure to CI/CD pipelines and containerization tools
- Nice to have: experience with ETL tools like DataStage, Prophecy, Informatica, or Ab Initio
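Two of the Delta Lake features named above, schema evolution and time travel, are easy to see in code. Here is a minimal sketch using the Spark Scala API (the listing also accepts Scala for Spark development); the table path is hypothetical and the delta-spark package is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object DeltaLakeDemo {
  def main(args: Array[String]): Unit = {
    // Enable the Delta Lake extension and catalog on the session.
    val spark = SparkSession.builder()
      .appName("delta-demo")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()
    import spark.implicits._

    val path = "/tmp/delta/events" // hypothetical table location

    // Write an initial version of the table (version 0).
    Seq((1, "click"), (2, "view")).toDF("id", "event")
      .write.format("delta").mode("overwrite").save(path)

    // Schema evolution: append a frame that carries an extra column.
    Seq((3, "click", "IN")).toDF("id", "event", "country")
      .write.format("delta").mode("append")
      .option("mergeSchema", "true").save(path)

    // Time travel: read the table exactly as it was at version 0.
    val v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    v0.show()
  }
}
```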
Posted 4 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Sanofi We are an innovative global healthcare company, driven by one purpose: we chase the miracles of science to improve people's lives. Our team, across some 100 countries, is dedicated to transforming the practice of medicine by working to turn the impossible into the possible. We provide potentially life-changing treatment options and life-saving vaccine protection to millions of people globally, while putting sustainability and social responsibility at the center of our ambitions. Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions that will accelerate Manufacturing & Supply performance and help bring drugs and vaccines to patients faster, to improve health and save lives. Who You Are: You are a dynamic Data Engineer interested in challenging the status quo to design and develop globally scalable solutions that are needed by Sanofi's advanced analytics, AI and ML initiatives for the betterment of our global patients and customers. You are a valued influencer and leader who has contributed to making key datasets available to data scientists, analysts, and consumers throughout the enterprise to meet vital business needs. You have a keen eye for improvement opportunities while continuing to fully comply with all data quality, security, and governance standards. Our vision for digital, data analytics and AI Join us on our journey in enabling Sanofi's digital transformation through becoming an AI-first organization. This means: AI Factory - Versatile Teams Operating in Cross Functional Pods: utilizing digital and data resources to develop AI products, bringing data management, AI and product development skills to products, programs and projects to create an agile, fulfilling and meaningful work environment. Leading Edge Tech Stack: experience building products that will be deployed globally on a leading-edge tech stack. World Class Mentorship and Training: working with renowned leaders and academics in machine learning to further develop your skill set. There are multiple vacancies across our Digital profiles and the NA region. Further assessments will be completed to determine the specific function and level of hired candidates.
Job Highlights
Propose and establish technical designs to meet business and technical requirements
Develop and maintain data engineering solutions based on requirements and design specifications using appropriate tools and technologies
Create data pipelines / ETL pipelines and optimize performance
Test and validate developed solutions to ensure they meet requirements
Create design and development documentation based on standards for knowledge transfer, training, and maintenance
Work with business and product teams to understand requirements, and translate them into technical needs
Adhere to and promote best practices and standards for code management, automated testing, and deployments
Leverage existing or create new standard data pipelines within Sanofi to bring value through business use cases
Develop automated tests for CI/CD pipelines
Gather/organize large and complex data assets, and perform relevant analysis
Conduct peer reviews for quality, consistency, and rigor for production-level solutions
Actively contribute to the Data Engineering community and define leading practices and frameworks
Communicate results and findings in a clear, structured manner to stakeholders
Remain up to date on the company's standards, industry practices and emerging technologies
Key Functional Requirements & Qualifications
Experience working with cross-functional teams to solve complex data architecture and engineering problems
Demonstrated ability to learn new data and software engineering technologies in a short amount of time
Good understanding of agile/scrum development processes and concepts
Able to work in a fast-paced, constantly evolving environment and manage multiple priorities
Strong technical analysis and problem-solving skills related to data and technology solutions
Excellent written, verbal, and interpersonal skills with the ability to communicate ideas, concepts and solutions to peers and leaders
Pragmatic and capable of solving complex issues, with technical intuition and attention to detail
Service-oriented, flexible, and approachable team player
Fluent in English (other languages a plus)
Key Technical Requirements & Qualifications
Bachelor's Degree or equivalent in Computer Science, Engineering, or a relevant field
4 to 5+ years of experience in data engineering, integration, data warehousing, business intelligence, business analytics, or a comparable role with relevant technologies and tools, such as Spark/Scala, Informatica/IICS/dbt
Understanding of data structures and algorithms
Working knowledge of scripting languages (Python, shell scripting)
Experience in cloud-based data platforms (Snowflake is a plus)
Experience with job scheduling and orchestration (Airflow is a plus)
Good knowledge of SQL and relational database technologies/concepts
Experience working with data models and query tuning
Nice To Haves
Experience working in the life sciences/pharmaceutical industry is a plus
Familiarity with data ingestion through batch, near real-time, and streaming environments
Familiarity with data warehouse concepts and architectures (data mesh a plus)
Familiarity with source code management tools (GitHub a plus)
Pursue Progress Discover Extraordinary Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. So, let's be those people.
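One of the highlights above is developing automated tests for CI/CD pipelines. Here is a minimal sketch of what such a test can look like with ScalaTest; the transformation under test and its validation rules are hypothetical.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical pipeline step: standardise country codes before loading.
object Transform {
  def normaliseCountry(raw: String): Option[String] = {
    val cleaned = raw.trim.toUpperCase
    if (cleaned.matches("[A-Z]{2}")) Some(cleaned) else None
  }
}

// Unit tests like these run in the CI/CD pipeline on every commit,
// so a broken transformation fails the build rather than the load.
class TransformSpec extends AnyFunSuite {
  test("valid two-letter codes are normalised") {
    assert(Transform.normaliseCountry(" in ") === Some("IN"))
  }
  test("malformed codes are rejected rather than loaded") {
    assert(Transform.normaliseCountry("India") === None)
  }
}
```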
Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
Posted 4 days ago
5-10 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
Role: Data Engineers (5-10 Years of Experience)
Experience: 5-10 years
Location: Gurgaon, Pune, Bangalore, Chennai, Jaipur and Bhopal
Skills: Python/Scala, SQL, ETL, Big Data (Spark, Kafka, Hive), Cloud (AWS/Azure/GCP), Data Warehousing
Responsibilities: Build and maintain robust, scalable data pipelines and systems. Design and implement ETL processes to support analytics and reporting. Optimize data workflows for performance and scalability. Collaborate with data scientists, analysts, and engineering teams. Ensure data quality, governance, and security compliance.
Required Skills Strong experience with Python/Scala, SQL, and ETL tools. Hands-on with Big Data technologies (Hadoop, Spark, Kafka, Hive, etc.). Proficiency in cloud platforms (AWS/GCP/Azure). Experience with data warehousing (e.g., Redshift, Snowflake, BigQuery). Familiarity with CI/CD pipelines and version control systems.
Nice To Have Experience with Airflow, Databricks, or dbt. Knowledge of real-time data processing.
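For the real-time processing noted under nice-to-haves, here is a minimal sketch of a Kafka-to-Spark Structured Streaming job in Scala; the broker address, topic name and window sizes are illustrative assumptions, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream").getOrCreate()
    import spark.implicits._

    // Subscribe to a hypothetical Kafka topic.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .load()

    // Kafka delivers bytes; cast the value, then count events per minute.
    // The watermark bounds state for late-arriving records.
    val counts = raw.selectExpr("CAST(value AS STRING) AS event", "timestamp")
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"event")
      .count()

    // Write running counts to the console for demonstration purposes.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```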
Posted 4 days ago
2.0 years
0 Lacs
Greater Chennai Area
On-site
Why CDM Smith? Check out this video and find out why our team loves to work here! Join Us! CDM Smith - where amazing career journeys unfold. Imagine a place committed to offering an unmatched employee experience. Where you work on projects that are meaningful to you. Where you play an active part in shaping your career journey. Where your co-workers are invested in you and your success. Where you are encouraged and supported to do your very best and given the tools and resources to do so. Where it's a priority that the company takes good care of you and your family. Our employees are the heart of our company. As an employer of choice, our goal is to provide a challenging, progressive and inclusive work environment which fosters personal leadership, career growth and development for every employee. We value passionate individuals who challenge the norm, deliver world-class solutions and bring diverse perspectives. Join our team, and together we will make a difference and change the world. Job Description CDM Smith is seeking an Artificial Intelligence/Machine Learning Engineer to join our Digital Engineering Solutions team. This individual will be part of the Data Technology group within the Digital Engineering Solutions team, helping to drive strategic Architecture, Engineering and Construction (AEC) initiatives using cutting-edge data technologies and analytics to deliver actionable business insights and robust solutions for AEC professionals and client outcomes. The Data Technology group will lead the firm in AEC-focused business intelligence and data services by providing architectural guidance, technological vision, and solution development. The group will specifically utilize advanced analytics, data science, and AI/ML to give our business and our products a competitive advantage, which includes understanding and managing the data, how it interconnects, and architecting and engineering data for self-serve BI and BA opportunities. This position is for a person who has demonstrated excellence in AI/ML engineering, is experienced with data technology and processes, and enjoys framing a problem, shaping and creating solutions, and helping to lead and champion implementation. As a member of the Digital Engineering Solutions team, the Data Technology group will also engage in research and development and provide guidance and oversight to the AEC practices at CDM Smith, engaging in new product research, testing, and the incubation of data technology-related ideas that arise from around the company. Key Responsibilities Contributes to advanced analytics and artificial intelligence (AI) and machine learning (ML) solution techniques that address complex business challenges, particularly within the AEC domain. Applies state-of-the-art algorithms and techniques such as deep learning, NLP, computer vision, and time-series analysis for domain-specific use cases. Analyzes large datasets to identify patterns and trends. Participates in the testing and validation of AI model accuracy and reliability to ensure models perform in line with business requirements and expectations. Assists with AI/ML workflow optimization by implementing MLOps practices, including CI/CD pipelines, model retraining, and version control. Collaborates with Data Engineers, Data Scientists, and other stakeholders to design and implement end-to-end AI/ML solutions.
Stays abreast of the latest developments and advancements, including new and emerging technologies, best practices, and new tools and software applications, and how they could impact CDM Smith. Assists with the development of documentation, standards, best practices, and workflows for data technology hardware/software in use across the business. Performs other duties as required. Skills And Abilities Good understanding of the software development life cycle. Basic experience with building and deploying machine learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn. Basic experience with cloud-based AI/ML services, particularly in Microsoft Azure and Databricks. Basic experience with programming languages (e.g., R, Python, Scala). Knowledge of MLOps practices, including automated pipelines, model versioning, monitoring, and lifecycle management. Knowledge of data privacy, security, and ethical AI principles, ensuring compliance with relevant standards. Excellent problem-solving and critical thinking skills to identify and address technical challenges effectively. Strong critical thinking skills to generate innovative solutions and improve business processes. Ability to effectively communicate complex technical concepts to both technical and non-technical audiences. Detail-oriented, with the ability to assist with executing highly complex or specialized projects. Minimum Qualifications Bachelor's degree. 0-2 years of related experience. Equivalent additional related experience will be considered in lieu of a degree. Amount Of Travel Required 0% Background Check and Drug Testing Information CDM Smith Inc. and its divisions and subsidiaries (hereafter collectively referred to as "CDM Smith") reserves the right to require background checks including criminal, employment, education, licensure, etc. as well as credit and motor vehicle when applicable for certain positions. In addition, CDM Smith may conduct drug testing for designated positions. Background checks are conducted after an offer of employment has been made in the United States. The timing of when background checks will be conducted on candidates for positions outside the United States will vary based on country statutory law, but in no case will the background check precede an interview. CDM Smith will conduct interviews of qualified individuals prior to requesting a criminal background check, and no job application submitted prior to such interview shall inquire into an applicant's criminal history. If this position is subject to a background check for any convictions related to its responsibilities and requirements, employment will be contingent upon successful completion of a background investigation including criminal history. Criminal history will not automatically disqualify a candidate. In addition, during employment individuals may be required by CDM Smith or a CDM Smith client to successfully complete additional background checks, including motor vehicle record as well as drug testing. Agency Disclaimer All vendors must have a signed CDM Smith Placement Agreement from the CDM Smith Recruitment Center Manager to receive payment for your placement. Verbal or written commitments from any other member of the CDM Smith staff will not be considered binding terms. All unsolicited resumes sent to CDM Smith and any resume submitted to any employee outside of the CDM Smith Recruiting Center Team (RCT) will be considered property of CDM Smith. CDM Smith will not be held liable to pay a placement fee.
Business Unit: COR | Group: COR | Assignment Category: Fulltime-Regular | Employment Type: Regular
Posted 4 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. American Express has embarked on an exciting transformation driven by an energetic new team of high performers. This is a great opportunity to join the Customer Marketing organization within American Express Technologies and become a driver of this exciting journey. We are looking for a highly skilled and experienced Senior Engineer with a history of building Big Data, GCP Cloud, Python and Spark applications. The Senior Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. Joining the Enterprise Marketing team, this role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. We pride ourselves on a culture of kindness and positivity, and a continuous focus on supporting colleague development to help you achieve your career goals. We lead with integrity, and we emphasize work/life balance for all of our teammates. How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities. Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP and the Adobe Suite (like Customer Journey Analytics). Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by the team. Take your place as a core member of an Agile team driving the latest development practices. Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code review and participate in technical discussions with the team on the best solutions possible. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives.
Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology at #TeamAmex.
Minimum Qualifications:
• BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
• 8+ years of hands-on software development experience with Big Data & Analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow.
• Working knowledge of the Adobe Suite, such as Adobe Experience Platform and Adobe Customer Journey Analytics.
• Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
• Design and development experience with Kafka, real-time ETL pipelines and APIs is desirable.
• Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
• Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
• Understanding of distributed (multi-tiered) systems, data structures, algorithms and design patterns.
• Strong object-oriented programming skills and design patterns.
• Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
• Good knowledge of and experience with configuration management tools like GitHub.
• Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
• Looks proactively beyond the obvious for continuous improvement opportunities.
• Communicates effectively with product and cross-functional teams.
• Willingness to learn new technologies and leverage them to their optimal potential.
• Understanding of various SDLC methodologies, familiarity with Agile and scrum ceremonies.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for the Marketing and Finance business units. This position supports the integration of data from various data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. This position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and applications software. This position participates and contributes to synthesizing disparate data sources to support reusable and reproducible data assets. Responsibilities Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development. Develops and delivers data engineering documentation. Gathers requirements, defines the scope, and performs the integration of data for data engineering projects. Recommends analytic reporting products/tools and supports the adoption of emerging technology. Performs data engineering maintenance and support. Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis. Uses ETL tools to pull data from various sources and load the transformed data into a database or business intelligence platform. Required Qualifications Codes using programming languages used for statistical analysis and modeling, such as Python/Java/Scala/C#. Strong understanding of database systems and data warehousing solutions. Strong understanding of the data interconnections between organizations' operational and business functions. Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing the data securely, and providing data accessibility. Strong understanding of the data environment to ensure that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance. Strong knowledge of data structures, as well as data filtering and data optimization. Strong understanding of analytic reporting technologies and environments (e.g., PBI, Looker, Qlik, etc.). Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure preferred. Understanding of distributed systems and the underlying business problem being addressed, as well as guiding team members on how their work will assist by performing data analysis and presenting findings to the stakeholders. Bachelor's degree in MIS, mathematics, statistics, or computer science, international equivalent, or equivalent job experience.
Required Skills 3 years of experience with Databricks. Other required experience includes: SSIS/SSAS, Apache Spark, Python, R and SQL, SQL Server. Preferred Skills Scala, Delta Lake, Unity Catalog, Azure Logic Apps, a cloud services platform (e.g., GCP, Azure, or AWS). Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
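The role description above centers on pulling data from various sources, transforming it, and loading it into a database. Here is a minimal batch ETL sketch in Spark Scala along those lines; the file path, JDBC URL, table name and credential variables are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import java.util.Properties

object BatchEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("batch-etl").getOrCreate()

    // Extract: read a raw CSV drop (hypothetical path).
    val raw = spark.read.option("header", "true").csv("/data/raw/shipments.csv")

    // Transform: basic cleansing and enrichment.
    val cleaned = raw
      .filter(col("tracking_id").isNotNull)         // drop unusable rows
      .withColumn("loaded_at", current_timestamp()) // audit column

    // Load: write into a warehouse table over JDBC (hypothetical connection;
    // credentials come from the environment, never from source code).
    val props = new Properties()
    props.setProperty("user", sys.env.getOrElse("DB_USER", "etl"))
    props.setProperty("password", sys.env.getOrElse("DB_PASSWORD", ""))
    cleaned.write.mode("append")
      .jdbc("jdbc:sqlserver://warehouse:1433;databaseName=analytics",
        "dbo.shipments", props)
  }
}
```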
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Data Engineering Lead This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences You'll be simplifying the bank through developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank We're recruiting for multiple roles across a range of levels, up to and including experienced managers What you'll do We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers. We'll Also Expect You To Be Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code Helping to define common coding standards and model monitoring performance best practices Owning and delivering the automation of data engineering pipelines through the removal of manual stages Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight Leading and delivering data engineering strategies to build a scalable data architecture and customer feature-rich dataset for data scientists Leading and developing solutions for streaming data ingestion and transformations in line with our streaming strategy The skills you'll need To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure You'll Also Demonstrate Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation An understanding of machine learning, information retrieval or recommendation systems Good working knowledge of CI/CD tools Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow Knowledge of messaging, event or streaming technology such as Apache Kafka Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
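The role calls for automated data quality testing within ETL pipelines. Here is a minimal sketch of hand-rolled completeness and uniqueness checks in Spark Scala; the column names, sample data and fail-fast policy are illustrative assumptions, and dedicated libraries exist for this in practice.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object QualityChecks {
  // Each check returns a human-readable failure message, or None if it passes.
  def notNull(df: DataFrame, colName: String): Option[String] = {
    val bad = df.filter(col(colName).isNull).count()
    if (bad > 0) Some(s"$colName has $bad null rows") else None
  }

  def unique(df: DataFrame, colName: String): Option[String] = {
    val dupes = df.groupBy(col(colName)).count().filter(col("count") > 1).count()
    if (dupes > 0) Some(s"$colName has $dupes duplicated values") else None
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dq").getOrCreate()
    import spark.implicits._

    // Hypothetical feature dataset handed over to data scientists.
    val features = Seq((1, "gold"), (2, "silver"), (2, "gold"))
      .toDF("customer_id", "segment")

    val failures = Seq(
      notNull(features, "customer_id"),
      unique(features, "customer_id")
    ).flatten

    // Failing fast keeps bad data out of downstream models.
    if (failures.nonEmpty) sys.error(failures.mkString("; "))
  }
}
```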
Posted 4 days ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
Major hubs such as Bengaluru, Hyderabad, Pune, Chennai, and Gurgaon are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
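Since functional programming concepts feature prominently in that list, here is a short, self-contained Scala sketch of the core idioms interviewers commonly probe: immutable data modelled as a sealed trait, pattern matching, and higher-order functions.

```scala
object FunctionalBasics {
  // An immutable algebraic data type, modelled with a sealed trait.
  sealed trait Shape
  final case class Circle(radius: Double) extends Shape
  final case class Rect(w: Double, h: Double) extends Shape

  // Pattern matching: the compiler warns if a case is missed.
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  def main(args: Array[String]): Unit = {
    val shapes = List(Circle(1.0), Rect(2.0, 3.0))

    // Higher-order functions transform data without mutation.
    val areas = shapes.map(area)
    val total = areas.foldLeft(0.0)(_ + _)

    println(f"total area = $total%.2f") // prints: total area = 9.14
  }
}
```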
Here are 25 interview questions that you may encounter when applying for Scala roles:
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!