3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview
Viraaj HR Solutions is a leading recruitment agency fostering a culture of excellence and innovation. Our mission is to connect the best talent with the right opportunities, ensuring mutual growth and success. We pride ourselves on our integrity, responsiveness, and commitment to client satisfaction, working relentlessly to understand the unique needs of each business we partner with. Join us in our journey to redefine talent acquisition and contribute to the success of businesses across various sectors.

Job Title: AWS Data Engineer
Location: On-Site in India

Role Responsibilities
- Design and develop data pipelines using AWS services.
- Implement ETL processes for data ingestion and transformation.
- Manage and optimize large-scale distributed data systems.
- Create and maintain data models that meet business requirements.
- Collaborate with data scientists and analysts to understand data needs.
- Ensure data quality and integrity through validation checks.
- Monitor and troubleshoot data pipeline issues proactively.
- Implement data security and compliance measures in line with regulations.
- Analyze system performance and optimize for efficiency.
- Prepare technical documentation for data processes and architecture.
- Participate in architecture and design discussions for data solutions.
- Research and evaluate new AWS tools and technologies.
- Work closely with cross-functional teams to align data strategies.
- Provide support for troubleshooting data-related issues.
- Stay updated on industry trends in data engineering and cloud technology.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience as a Data Engineer or in a related role.
- Proficiency in AWS services such as S3, EC2, Glue, and Redshift.
- Strong knowledge of SQL and database design principles.
- Experience with Python and ETL frameworks.
- Familiarity with data warehousing concepts and solutions.
- Understanding of data governance and best practices.
- Hands-on experience with big data technologies such as Hadoop or Spark.
- Ability to work independently and in a team environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills to articulate complex technical concepts.
- Experience in Agile methodologies is a plus.
- Knowledge of machine learning concepts is an added advantage.
- Ability to adapt to a fast-paced and evolving environment.
- Willingness to learn new tools and technologies as needed.

Skills: data modeling, cloud computing, Scala, problem-solving, SQL, data analysis, database management, ETL frameworks, database design, Python, AWS services (S3, EC2, Glue, Redshift), big data technologies (Hadoop, Spark), data governance, communication, data warehousing
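To make the first two responsibilities concrete, here is a minimal sketch of the kind of AWS Glue ETL job this role describes: read a table registered in the Glue Data Catalog, transform it with Spark, and land curated Parquet on S3. The database, table, bucket, and column names are illustrative assumptions, not details from the posting.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw records registered in the Data Catalog (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Transform: basic validation plus a derived partition column.
curated = (
    raw.dropna(subset=["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Land curated data, partitioned for downstream Redshift Spectrum/Athena reads.
curated.write.mode("append").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
job.commit()
```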
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AWS Data Engineer — Viraaj HR Solutions. The company overview, role responsibilities, qualifications, and skills are identical to the Chennai AWS Data Engineer listing above.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Sapiens is on the lookout for a Lead Data Scientist to become a key player in our Bangalore team. If you're a seasoned data science professional ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit.

Location: Bangalore
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity.

This position is part of the Sapiens Digital division; for more information, see: https://sapiens.com/solutions/data-and-analytics/

General Job Description
As the Lead Data Scientist at Sapiens, your primary responsibility is to spearhead the development and implementation of cutting-edge data analytics and machine learning solutions that drive informed decision-making and enhance overall business performance. You will lead a team of data scientists in leveraging advanced statistical models and predictive analytics to assess risk, optimize pricing strategies, and streamline underwriting processes. Collaborating closely with cross-functional teams, you will play a pivotal role in identifying opportunities to leverage data for business growth, customer retention, and fraud detection. As a key stakeholder in the development of data-driven strategies, you will also contribute to the continuous improvement of underwriting models, claims processing, and customer segmentation. With a focus on innovation, you will stay abreast of industry trends and emerging technologies, ensuring that the organization remains at the forefront of data science advancements within the insurance sector. Your leadership will be instrumental in driving a culture of data-driven decision-making and fostering collaboration between data science and other business functions to achieve strategic objectives.

Knowledge & Experience
- Master's or Ph.D. in Computer Science, Statistics, or a related field.
- 3+ years of experience as a Lead Data Scientist.
- Proficient in statistical modeling, machine learning algorithms, and data manipulation techniques relevant to insurance analytics.
- Experience deploying models to production, ensuring scalability, reliability, and integration with existing business processes.
- Background in applying AI/ML models within the insurance industry.
- Proficiency in Python and relevant libraries/frameworks (e.g., TensorFlow, PyTorch).
- Familiarity with big data platforms (e.g., Hadoop, Spark) for handling and analyzing large datasets efficiently.
- Solid understanding of data management, governance, and security.
- Knowledge of regulatory compliance in AI/ML practices.

Required Product/Project Knowledge
- Understanding of the insurance industry and its processes.
- Knowledge of data science, statistics, and machine learning applications in the insurance domain.

Required Skills
- Programming: proficient in Python.
- Tools/Frameworks: experience with at least three of Databricks, MLflow, TensorFlow, PyTorch, GPT/LLM tooling, and other relevant tools (a brief illustrative sketch follows this posting).
- Leadership: ability to lead and mentor a team.
- Strategic Thinking: develop and execute AI/ML strategies aligned with business objectives.
- Data Management: ensure data availability and quality for model training.
- Evaluation: assess the applicability of AI/ML technologies and recommend tools/frameworks.

Common Tasks
- Collaborate with product managers, customers, and other stakeholders.
- Implement monitoring systems for model performance.
- Evaluate and recommend AI/ML technologies.
- Oversee model development from ideation to deployment.
- Collaborate with IT and data governance teams.

Required Soft Skills
- Leadership: lead and mentor a team of data scientists and engineers.
- Communication: collaborate with cross-functional teams and stakeholders.
- Innovation: drive innovation in AI/ML strategies for insurance products.
- Adaptability: stay updated on the latest advancements in AI/ML technologies.
- Ethics: ensure AI/ML practices comply with industry regulations and ethical standards.

About Sapiens
Sapiens is a global leader in the insurance industry, delivering its award-winning, cloud-based SaaS insurance platform to over 600 customers in more than 30 countries. Sapiens' platform offers pre-integrated, low-code capabilities to accelerate customers' digital transformation. With more than 40 years of industry expertise, Sapiens has a highly professional team of over 5,000 employees globally. For more information, visit www.sapiens.com.

Disclaimer: Sapiens India does not authorise any third parties to release employment offers or conduct recruitment drives on its behalf. Beware of inauthentic and fraudulent job offers or recruitment drives from individuals or websites purporting to represent Sapiens. Further, Sapiens does not charge any fee or other emoluments for any reason (including, without limitation, visa fees), nor does it seek compensation from educational institutions to participate in recruitment events. Please check the authenticity of any such offers before acting on them; where acted upon, you do so at your own risk. Sapiens shall not be responsible for honouring promises made by fraudulent third parties, nor for any monetary or other loss incurred by the aggrieved individual or educational institution. If you come across any fraudulent activity in the name of Sapiens, please report the incident to sharedservices@sapiens.com.
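As a concrete illustration of the MLflow experience the Required Skills section names, here is a minimal sketch of experiment tracking and model logging. The experiment name, parameters, and model choice are illustrative assumptions (it assumes a local or already-configured MLflow tracking URI), not Sapiens' actual workflow.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("claims-severity-demo")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=0).fit(X_train, y_train)

    # Track params and metrics so runs are comparable across the team.
    mlflow.log_params(params)
    mlflow.log_metric("r2_test", model.score(X_test, y_test))
    # Log the fitted model so it can later be served or promoted to production.
    mlflow.sklearn.log_model(model, "model")
```

Tracking every run this way is what makes the posting's "monitoring systems for model performance" and "ideation to deployment" oversight practical: each candidate model carries its own parameters, metrics, and artifact.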
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
AWS Data Engineer — Viraaj HR Solutions. The company overview, role responsibilities, qualifications, and skills are identical to the Chennai AWS Data Engineer listing above.
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: candidates with 15+ years of hands-on experience.

Must Have
- Familiarity with the CCaaS domain, contact center operations, customer experience metrics, and industry-specific challenges.
- Understanding of conversational data (chats, emails, and calls) used to train conversational AI systems.
- In-depth knowledge of CCaaS platforms such as NICE CXone, Genesys, and Cisco, including their architecture, functionalities, and integration capabilities.
- Familiarity with contact center metrics such as average handle time (AHT), first call resolution (FCR), and customer satisfaction (CSAT).
- Familiarity with sentiment analysis, topic modeling, and text classification techniques (a brief illustrative sketch follows this posting).
- Proficiency in data visualization tools such as Tableau, Power BI, and QuickSight.
- Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their services for scalable data storage, processing, and analytics.
- NLU verticals expertise: ASR generation, SSML modeling, intent analytics, conversational AI testing, Agent Assist, proactive outreach orchestration, and generative AI.
- Ability to apply advanced statistical and machine learning techniques to analyze large datasets and develop predictive models and algorithms that enhance contact center performance.

Nice To Have
- Proficiency in programming languages such as Python, PySpark, R, or SQL.
- Strong understanding of data science principles, statistical analysis, and machine learning techniques.
- Experience in predictive modeling; skilled in techniques like regression analysis, time series forecasting, clustering, and NLP.
- Knowledge of distributed computing frameworks like Hadoop and Spark for processing large volumes of data.
- Understanding of NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data.
- Awareness of data security best practices, encryption techniques, and compliance regulations (e.g., GDPR, CCPA).
- Understanding of ethical considerations in data science and responsible AI practices.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
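For the text classification bullet above, here is a minimal, self-contained sketch of a common baseline: TF-IDF features plus a regularized linear classifier over contact-center snippets. The labels and example texts are illustrative assumptions, not PwC data or methodology.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy data: chat/call snippets labeled with a contact-driver category.
texts = [
    "I want to cancel my subscription",
    "My invoice amount looks wrong this month",
    "The app crashes when I log in",
    "Please update my billing address",
    "I can't reset my password",
    "Cancel my account effective today",
]
labels = ["cancellation", "billing", "technical",
          "billing", "technical", "cancellation"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42
)

# TF-IDF unigrams and bigrams + logistic regression: a sensible baseline
# to measure heavier conversational-AI models against.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```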
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Viraaj HR Solutions is a forward-thinking recruitment agency specializing in connecting talent with opportunities across various industries. Our mission is to empower individuals through meaningful employment, while fostering growth for businesses through innovative talent acquisition strategies. We value integrity, collaboration, and excellence in our operations. As part of our commitment to delivering exceptional HR solutions, we are currently seeking an Azure Data Engineer to join our client on-site in India.

Role Responsibilities
- Design and implement data solutions on Microsoft Azure.
- Develop ETL processes to extract, transform, and load data efficiently.
- Perform data modeling and database design to support analytics and reporting.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.
- Optimize existing data pipelines for performance and reliability.
- Ensure data integrity and consistency through robust validation checks.
- Maintain and troubleshoot data integration processes.
- Implement data governance and security best practices.
- Work with big data technologies to manage large data sets.
- Document all technical processes and data architecture.
- Utilize Azure Data Factory and other Azure services for data management.
- Conduct performance tuning of SQL queries and data flows.
- Participate in design reviews and code reviews.
- Stay current with Azure updates and analyze their potential impact on existing solutions.
- Provide support and training to junior data engineers.

Qualifications
- Bachelor's degree in Computer Science or a related field.
- 3+ years of experience in data engineering or a related role.
- Proficiency in Azure Data Factory and Azure SQL Database.
- Strong knowledge of SQL and relational databases.
- Experience with ETL tools and processes.
- Familiarity with data warehousing concepts.
- Hands-on experience with big data technologies like Hadoop or Spark.
- Knowledge of Python or other scripting languages.
- Understanding of data modeling concepts and techniques.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Experience with data governance practices.
- Certifications in Azure or data engineering are a plus.
- Familiarity with Agile methodologies.

Skills: Microsoft Azure, Azure Data Factory (ADF), Azure SQL Database, Azure Databricks, SQL Server, relational databases, database design, data modeling, data warehousing, data governance, ETL processes, big data technologies, Hadoop, Spark, Python scripting, Agile methodologies
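As a concrete illustration of the pipeline and validation responsibilities above, here is a minimal sketch of a curation step on Azure Databricks: read raw Parquet from ADLS Gen2, apply integrity checks, and write a curated Delta table. The storage account, container, and column names are illustrative assumptions, and the Delta write assumes a Databricks (or other Delta-enabled) runtime.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

# Hypothetical ADLS Gen2 paths.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders/"

orders = spark.read.format("parquet").load(raw_path)

# Basic integrity checks: drop rows missing a key or carrying a negative amount.
curated = (
    orders
    .dropna(subset=["order_id", "customer_id"])
    .filter(F.col("amount") >= 0)
    .withColumn("ingest_date", F.current_date())
)

# Delta gives ACID writes and time travel for downstream consumers.
curated.write.format("delta").mode("overwrite").save(curated_path)
```

In practice a step like this would be orchestrated by an Azure Data Factory pipeline, with the validation thresholds owned by the data governance process the posting mentions.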
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AWS Data Engineer — Viraaj HR Solutions. The company overview, role responsibilities, qualifications, and skills are identical to the Chennai AWS Data Engineer listing above.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AWS Data Engineer — Viraaj HR Solutions. The company overview, role responsibilities, qualifications, and skills are identical to the Chennai AWS Data Engineer listing above.
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
PwC — Data and Analytics Engineering (AI/ML, CCaaS domain). The description, Must Have and Nice To Have requirements, and educational background are identical to the Andhra Pradesh PwC listing above.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview
Viraaj HR Solutions is a leading talent management firm dedicated to providing exceptional human resource services for businesses in various sectors. Our mission is to connect companies with the best talent while fostering a supportive and inclusive workplace culture. We value innovation, integrity, and excellence, ensuring that our clients thrive in a competitive market.

Role Overview
We are seeking a skilled PySpark Data Engineer to join our dynamic team in India. The ideal candidate will have a strong background in data engineering, with expertise in processing large volumes of data using PySpark. You will play a crucial role in designing and implementing data pipelines and ensuring efficient data storage and retrieval. This position requires strong analytical skills and the ability to work collaboratively in a fast-paced environment.

Role Responsibilities
- Design, develop, and maintain robust data pipelines using PySpark.
- Implement ETL processes for data extraction, transformation, and loading.
- Work with structured and unstructured data sources and ensure data quality.
- Optimize data processing workflows for better performance and efficiency.
- Collaborate with data scientists and analysts to understand data requirements.
- Create and manage data warehousing solutions for end-user access.
- Integrate data from various sources into a unified platform.
- Monitor and troubleshoot data pipeline performance issues.
- Conduct data validation and cleansing to maintain data accuracy.
- Document processes and workflows for team transparency.
- Stay updated on new technologies and trends in data engineering.
- Participate in code reviews and ensure best practices in data engineering.
- Assist in technical designs and architecture for data solutions.
- Provide technical support and training to junior staff.
- Contribute to strategic discussions around data architecture and management.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in PySpark and the Spark ecosystem.
- Hands-on experience with SQL and database management systems.
- Familiarity with Hadoop and big data technologies.
- Experience with cloud platforms like AWS, Azure, or Google Cloud.
- Understanding of data modeling and database design concepts.
- Proficiency in Python for data manipulation and analysis.
- Strong analytical and troubleshooting skills.
- Excellent communication and teamwork abilities.
- Knowledge of data governance and compliance practices.
- Experience in implementing ETL and data warehousing solutions.
- Ability to work in a fast-paced environment and manage multiple priorities.
- Familiarity with version control systems (e.g., Git).
- Willingness to learn and adapt to new technologies.

This is an exciting opportunity to work with a forward-thinking company that values expertise and innovation in the field of data engineering. If you are passionate about data and eager to contribute to transformative projects, we encourage you to apply today.

Skills: PySpark, Spark, Python, SQL, database management systems, data modeling, data warehousing, ETL, data governance, big data technologies, Hadoop, AWS, AWS Glue, Azure, Google Cloud, Git, problem-solving
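To ground the validation and cleansing responsibility above, here is a minimal PySpark sketch: report per-column null rates, then deduplicate and standardize timestamps before writing a clean dataset. The S3 path and column names are illustrative assumptions, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("validate-events").getOrCreate()

# Hypothetical source path.
events = spark.read.json("s3a://example-bucket/raw/events/")

# Report null rates per column so data-quality regressions are visible.
total = events.count()
null_rates = {
    c: events.filter(F.col(c).isNull()).count() / total
    for c in events.columns
}
print(null_rates)

# Cleanse: deduplicate on the event key and standardize timestamps.
clean = (
    events
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)
clean.write.mode("overwrite").parquet("s3a://example-bucket/clean/events/")
```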
Posted 1 week ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
PySpark Data Engineer — Viraaj HR Solutions. The company overview, role overview, responsibilities, qualifications, and skills are identical to the Chennai PySpark Data Engineer listing above.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
PySpark Data Engineer — Viraaj HR Solutions. The company overview, role overview, responsibilities, qualifications, and skills are identical to the Chennai PySpark Data Engineer listing above.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Azure Data Engineer — Viraaj HR Solutions. The company overview, role responsibilities, qualifications, and skills are identical to the Pune Azure Data Engineer listing above.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
PySpark Data Engineer — Viraaj HR Solutions. The company overview, role overview, responsibilities, qualifications, and skills are identical to the Chennai PySpark Data Engineer listing above.
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
PySpark Data Engineer — Viraaj HR Solutions. The company overview, role overview, responsibilities, qualifications, and skills are identical to the Chennai PySpark Data Engineer listing above.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
PySpark Data Engineer — Viraaj HR Solutions. The company overview, role overview, responsibilities, qualifications, and skills are identical to the Chennai PySpark Data Engineer listing above.
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Greater Chennai Area
On-site
Customers trust the Alation Data Intelligence Platform for self-service analytics, cloud transformation, data governance, and AI-ready data, fostering data-driven innovation at scale. With more than $340M in funding, a valuation of over $1.7 billion, and nearly 600 customers, including 40% of the Fortune 100, Alation helps organizations realize value from data and AI initiatives.

Alation was recognized in 2024 as one of Inc. Magazine's Best Workplaces for the fifth time, a testament to our commitment to creating an inclusive, innovative, and collaborative environment. Collaboration is at the forefront of everything we do. We strive to bring diverse perspectives together and empower each team member to contribute their unique strengths to live out our values each day. These are: Move the Ball, Build for the Long Term, Listen Like You're Wrong, and Measure Through Customer Impact. Joining Alation means being part of a fast-paced, high-growth company where every voice matters, and where we're shaping the future of data intelligence with AI-ready data. Join us on our journey to build a world where data culture thrives and curiosity is celebrated each day!

Job Description
As a Manager/Sr. Manager of Technical Support at Alation, you will lead the day-to-day operations of a team of Technical Support Engineers. You will lead a customer-facing team as a key leader in the customer success organization. You will be responsible for directly monitoring, reporting, and driving improvements to team-level metrics and KPIs, acting as an escalation point with customers and internal teams, and optimizing and developing support processes and tools. Your work will be cross-functional and will involve working with engineering, QA, DevOps, product management, and sales. The location is Chennai (hybrid model).

What You'll Do
- Manage a team of senior-level Technical Support Engineers.
- Develop capacity forecasts and resource allocation models to ensure proper coverage.
- Drive the scaling, onboarding, and ongoing specialization of the team.
- Implement innovative processes to increase support efficiency and overall customer satisfaction.
- Handle customer escalations and assist with troubleshooting and triaging incidents.
- Manage the backlog and ensure that support SLAs and KPIs are met.
- Partner with Engineering and Product to prioritize issues and product improvements.

You Should Have
- 10-15 years of enterprise application support or operations experience, supporting customers in on-premise, cloud, and hybrid setups.
- Excellent communication skills, with a strong ability to discuss complex technical concepts with customers, engineers, and product managers.
- Prior experience managing a team of frontline and senior-level Support Engineers.
- Solid understanding of data platforms, data management, analytics, or the BI space.
- Self-starter with strong creative problem-solving, facilitation, and interpersonal skills.
- First-hand leadership experience working in a global organization and partnering with regional managers and leads to ensure a seamless customer experience.
- Experience troubleshooting Linux and running shell commands.
- Understanding of relational databases such as Oracle and Postgres; SQL is a must (a brief illustrative sketch follows this posting).

A big plus if you have experience in the following areas:
- Postgres (DB internals)
- Elasticsearch, NoSQL, MongoDB
- Hadoop ecosystem (Hive, HBase)
- Cloud technologies and frameworks such as Kubernetes and Docker
- Scoping or building tooling to improve the support experience

Alation, Inc. is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, veteran status, genetic information, ethnicity, citizenship, or any other characteristic protected by law. The Company will strive to provide reasonable accommodations to permit qualified applicants who have a need for an accommodation to participate in the hiring process (e.g., accommodations for a job interview) if so requested. This company participates in E-Verify.
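For the SQL and Postgres requirements above, here is a minimal, hypothetical sketch of the kind of triage a support escalation often starts with: listing long-running queries via the standard `pg_stat_activity` system view. The connection details are placeholders, not Alation infrastructure.

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="exampledb", user="support", password="example"
)
with conn.cursor() as cur:
    # Surface the ten longest-running non-idle queries.
    cur.execute(
        """
        SELECT pid, state, now() - query_start AS runtime, left(query, 80)
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY runtime DESC
        LIMIT 10;
        """
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```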
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage (a minimal sketch follows this posting).
Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
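For readers unfamiliar with the stack this posting names, the sketch below shows one way a pipeline step like this might look: loading a file from Cloud Storage into BigQuery and materializing a reporting table with the google-cloud-bigquery Python client. It is a minimal illustration only; the project, bucket, dataset, and table names are hypothetical placeholders, not anything from the role.

```python
# A minimal sketch of a BigQuery-based ELT step.
# All project/bucket/table identifiers below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Load a CSV file from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders.csv",   # hypothetical bucket/path
    "my-project.staging.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,               # infer the schema from the file
    ),
)
load_job.result()  # block until the load completes

# Transform: aggregate the staging data into a reporting table.
query = """
    CREATE OR REPLACE TABLE `my-project.marts.daily_orders` AS
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my-project.staging.orders`
    GROUP BY order_date
"""
client.query(query).result()
```

In practice, heavier transformations would typically run as Dataflow or Dataproc jobs, with a scheduler such as Airflow (also named in this posting) orchestrating the steps.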
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Accountabilities
Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives.
Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning).
Analyse and preprocess large-scale datasets for training and evaluation.
Train, test, and optimize ML models for accuracy, scalability, and performance.
Deploy ML models in production using cloud platforms and/or MLOps best practices.
Monitor and evaluate model performance over time, ensuring reliability and robustness.
Document findings, methodologies, and results to share insights with stakeholders.

Qualifications, Experience and Skills
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming).
Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn.
Strong foundation in linear algebra, probability, statistics, and optimization techniques.
Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization (illustrated in the sketch after this posting).
Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark.
Ability to think critically and apply your knowledge to solve complex ML problems.
Strong communication and collaboration skills to work effectively in diverse teams.

Additional Skills (Good to Have)
Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow).
Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark).
Previous internships, academic research, or projects showcasing your ML skills.
Familiarity with deployment frameworks like Docker and Kubernetes.
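To make the workflow this posting describes concrete, here is a minimal sketch of a supervised-learning loop in scikit-learn: a train/test split, a regularized model, and a held-out evaluation. The dataset and hyperparameters are illustrative choices, not anything prescribed by the employer.

```python
# Minimal supervised-learning sketch: split, regularized fit, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)  # bundled toy dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# L2 regularization (C is the inverse regularization strength) is one
# standard way to control the overfitting the posting mentions.
model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```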
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Summary
As a Staff Machine Learning Engineer, you will play a crucial role in bridging the gap between data science and production, ensuring the seamless integration and deployment of machine learning models into operational systems. You will be responsible for designing, implementing, and managing the infrastructure and workflows necessary to deploy, monitor, and maintain machine learning models at scale. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Responsibilities
Model Deployment and Integration: Collaborate with data scientists to optimize, package, and deploy machine learning models into production environments efficiently and reliably (see the sketch following this posting).
Infrastructure Design and Maintenance: Design, build, and maintain scalable and robust infrastructure for model deployment, monitoring, and management. This includes containerization, orchestration, and automation of deployment pipelines.
Continuous Integration/Continuous Deployment (CI/CD): Implement and manage CI/CD pipelines for automated model training, testing, and deployment.
Model Monitoring and Performance Optimization: Develop monitoring and alerting systems to track the performance of deployed models and identify anomalies or degradation in real time. Implement strategies for model retraining and optimization.
Data Management and Version Control: Establish processes and tools for managing data pipelines, versioning datasets, and tracking changes in model configurations and dependencies.
Security and Compliance: Ensure the security and compliance of deployed models and associated data. Implement best practices for data privacy, access control, and regulatory compliance.
Documentation and Knowledge Sharing: Document deployment processes, infrastructure configurations, and best practices. Provide guidance and support to other team members on MLOps practices and tools.
Collaboration and Communication: Collaborate effectively with cross-functional teams, including data scientists and business stakeholders. Communicate technical concepts and solutions to non-technical audiences.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
Excellent programming skills in languages such as Python.
Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
Proficiency in cloud platforms such as AWS, Azure and related services (e.g., AWS SageMaker, Azure ML).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform).
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
Familiarity with software engineering principles and best practices (e.g., version control, testing, debugging).
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Ability to work effectively in a fast-paced and dynamic environment.

Preferred Qualifications
Experience with big data technologies (e.g., Hadoop, Spark).
Knowledge of microservices architecture and distributed systems.
Certification in relevant technologies or methodologies (e.g., AWS Certified Machine Learning – Specialty, Certified Kubernetes Administrator).
Experience with data engineering and ETL processes.
Understanding of machine learning concepts and algorithms.
Understanding of Large Language Models (LLMs) and Foundation Models (FMs).
Certification in machine learning or related fields.

Joining our team as a Staff ML Engineer offers an exciting opportunity to work at the intersection of data science, software engineering, and operations, contributing to the development and deployment of cutting-edge machine learning solutions. If you are passionate about leveraging technology to drive business value and thrive in a collaborative and innovative environment, we encourage you to apply.

GE HealthCare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership – always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you'd expect from an organization with global strength and scale, and you'll be surrounded by career opportunities in a culture that fosters care, collaboration and support.

Additional Information
Relocation Assistance Provided: No
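As an illustration of the model-packaging and tracking work described above, the sketch below logs a trained model and its metrics with MLflow (a tool named elsewhere in these listings). This is a generic MLOps pattern, not GE HealthCare's actual pipeline; the experiment name and hyperparameters are hypothetical.

```python
# Generic MLOps sketch: track params/metrics and log a versioned model
# artifact with MLflow, so a deployment job can pull a known version
# instead of an ad-hoc pickle file.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-regression")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestRegressor(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
    # Registering the model in a model registry would be the next step,
    # which assumes a database-backed tracking server.
    mlflow.sklearn.log_model(model, artifact_path="model")
```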
Posted 1 week ago
10.0 years
0 Lacs
Pune
Remote
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model.
Engage with project activities across the information lifecycle.
Understand client requirements, and develop data analytics strategies and solutions that meet them.
Apply knowledge and explain the benefits to organizations adopting strategies relating to next-gen/new-age data capabilities.
Be proficient in evaluating new technologies and identifying practical business cases to develop enhanced business value and increase operating efficiency.
Architect large-scale AI/ML products/systems impacting large-scale clients across industries.
Own end-to-end solutioning and delivery of data analytics/transformation programs.
Mentor and inspire a team of data scientists and engineers solving AI/ML problems through R&D while pushing the state of the art.
Liaise with colleagues and business leaders across domestic and global regions to deliver impactful analytics projects and drive innovation at scale.
Assist the sales team in reviewing RFPs, tender documents, and customer requirements.
Develop high-quality and impactful demonstrations, proof-of-concept pitches, solution documents, presentations, and other pre-sales assets.
Have in-depth business knowledge across a breadth of functional areas across sectors such as CPRD/FS/MALS/Utilities/TMT.

Your Profile
B.E. / B.Tech. + MBA (Systems / Data / Data Science / Analytics / Finance) with a good academic background.
Minimum 10+ years of on-the-job experience in data analytics, with at least 7 years of CPRD, FS, MALS, Utilities, TMT or other relevant domain experience required.
Specialization in data science, data engineering or the advanced analytics field is strongly recommended.
Excellent understanding of, and hands-on experience with, data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP and computer vision.
Good applied statistics skills, such as distributions, statistical inference and testing, etc.
Excellent understanding of, and hands-on experience building, deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.); a toy sketch follows this posting.
Proficient in coding in common data science languages and tools such as R, Python, Go, SAS, Matlab, etc.
At least 7 years' experience deploying digital and data science solutions on large-scale projects is required.
At least 7 years' experience leading/managing a data science team is required.
Exposure to or knowledge of cloud (AWS/GCP/Azure) and big data technologies such as Hadoop and Hive.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
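For illustration, here is a toy version of the text-analytics deep learning the profile mentions: an embedding-plus-LSTM binary classifier in Keras. The random token data stands in for a real tokenized corpus, and every shape and name is an assumption made purely for the example.

```python
# Toy embedding + LSTM text classifier in Keras; data and shapes are
# invented stand-ins for a real tokenized corpus.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 10_000, 50
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),       # token IDs -> dense vectors
    layers.LSTM(32),                        # sequence encoder
    layers.Dense(1, activation="sigmoid"),  # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random token IDs and labels stand in for real training data.
X = np.random.randint(0, vocab_size, size=(256, seq_len))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```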
Posted 1 week ago
5.0 years
0 Lacs
Pune
Remote
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model.
Contribute in the capacity of a Scrum Master/data business analyst to data projects.
Act as an agile coach, implementing and supporting agile principles, practices and rules of the Scrum process and other rules the team has agreed upon.
Apply knowledge of different frameworks like Scrum, Kanban, XP, etc.
Drive tactical, consistent team-level improvement as part of the Scrum.
Work closely with the Product Owner to prioritize by business value, keep work aligned with objectives, and drive the health of the product backlog.
Facilitate the Scrum ceremonies and analyze sprint reports and burn-down charts to identify areas of improvement.
Support and coordinate system implementations through the project lifecycle, working with other teams on a local and global basis.
Engage with project activities across the information lifecycle, often related to paradigms like building and managing business data lakes and ingesting data streams to prepare data, developing machine learning and predictive models to analyse data, visualizing data, and specializing in business models and architectures across various industry verticals.

Your Profile
Proven working experience as a Scrum Master/Data Business Analyst, with overall experience of 5 to 9+ years.
Preferably, domain knowledge of CPRD/FS/MALS/Utilities/TMT.
Independently able to work with the product management team and prepare functional analyses and user stories.
Experience in technical writing to create BRDs, FSDs, non-functional requirement documents, user manuals and use-case specifications.
Comprehensive and solid experience of Scrum as well as SDLC methodologies.
Experience in Azure DevOps/JIRA/Confluence or an equivalent system, and strong knowledge of the other agile frameworks.
Excellent meeting moderation and facilitation skills.
CSM/PSM 1 or 2 certification is mandatory; SAFe 5.0/6.0 is a plus.
Strong stakeholder management skills.
Good to have knowledge about the big data ecosystem, like Hadoop.
Good to have working knowledge of the R/Python language.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs.
It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the world's largest e-commerce products catalog. We power the online buying experience for customers worldwide so they can find, discover, and buy anything they want. Our massively scaled-out distributed systems process hundreds of millions of updates on the billions of products across physical, digital, and services offerings.

You will be part of the Catalog Support Programs (CSP) team under Catalog Support Operations (CSO) in the ASCS org. CSP provides program management, technical support, and strategic initiatives to enhance the customer experience, owning the implementation of business logic and configurations for ASCS. We are establishing a new centralized Business Intelligence team to build self-service analytical products for ASCS that provide relevant insights and data deep dives across the business. By leveraging advanced analytics and AI/ML, we will transform catalog data into predictive insights, helping prevent customer issues before they arise. Real-time intelligence will support proactive decision-making, enabling faster, data-driven decisions across the organization and driving long-term growth and an enhanced customer experience.

We are looking for an innovative, highly motivated, and experienced Business Intelligence Engineer who can think holistically about problems to understand how systems work together. You will work closely with engineering teams, product managers, program managers, and organizational leaders to deliver end-to-end data solutions aimed at continuously enhancing overall ASCS business performance and delivery quality.

As a Senior BIE, you will lead the data and reporting requirements for ASCS programs and projects. Your role will involve close engagement with senior leaders to generate insights and conduct deep dives into key metrics that directly influence organizational strategic decisions and priorities. You will demonstrate high proficiency in complex SQL scripting, often combining various data sets from diverse sources. You will own the design, development, and maintenance of ongoing metrics, reports, and dashboards to drive key business decisions. You will simplify and automate reporting, audits, and other data-driven activities. You will develop and drive best practices in data integrity, consistency, validations, and documentation. You will serve as a technical and analytical leader for the team, providing guidance and expertise to others on complex business and data challenges, and consistently deliver high-quality, timely results that demonstrate your deep subject matter expertise.

This role requires an individual with excellent analytical abilities and deep knowledge of business intelligence solutions, as well as business acumen and the ability to work with various tech and product teams across ASCS. The ideal candidate should have excellent business and communication skills to work with business owners to define roadmaps, develop milestones, define key business questions, and build data sets that answer those questions. You should have hands-on SQL and scripting-language experience, and excel in designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You will be instrumental in the creation of a reliable and scalable infrastructure for ongoing reporting and analytics.

You will structure ambiguous problems and design analytics across various disciplines, resulting in actionable recommendations ranging from strategic planning to product strategy/launches and engineering improvements. You will work closely with internal stakeholders to define key performance indicators (KPIs), implement them into dashboards and reports, and present insights in a concise and effective manner. This role will involve collaborating with business and tech leaders within ASCS and cross-functional teams to solve problems, create operational efficiencies, and deliver against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively uncover new insights that drive decision-making by senior leadership. As a key member of the CSP team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives.

Key job responsibilities
Lead the design, implementation, and delivery of BI solutions for ASCS.
Manage and execute end-to-end projects, including stakeholder management, data gathering/manipulation, modeling, problem-solving, and communication of insights.
Design, build, and maintain automated reporting, dashboards, and ongoing analysis to enable data-driven decisions.
Report key insight trends using statistical rigor to inform the larger team of business-impacting developments.
Retrieve and analyze data using a broad set of Amazon's data technologies and resources.
Earn the trust of customers and stakeholders by understanding their needs and solving problems with technology.
Work closely with business stakeholders and senior leadership to review roadmaps and contribute to strategy.
Apply multi-domain expertise to own the end-to-end roadmap and analytical approach for complex problems.
Translate business requirements into analysis plans, review with stakeholders, and maintain high execution standards.
Proactively work with stakeholders to define use cases and standardized analytical outputs.
Scale data processes and reports through efficient query development and automation.
Demonstrate deep knowledge of available data sources to enable comparative and complex analyses.
Actively manage project timelines, communicate with stakeholders, and represent the team on initiatives.
Build and manage high-impact business review metrics, reports, and dashboards.
Provide BI solutions for loosely defined problems, deliver large-scale analytical projects, and highlight new opportunities.
Optimize code quality and BI processes to drive continuous improvement.
Extract, transform, and load data from multiple sources using SQL, scripting, and ETL tools.

A day in the life
A day in the life of a BIE-III will include:
Working closely with cross-functional teams, including product/program managers, software development managers, applied/research/data scientists, and software developers.
Leading the BIE team and owning the execution of BIE projects.
Building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making.
Leading reporting and analytics initiatives to drive data-informed decision making.
Designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight.
Transforming complex business requirements into actionable analytics solutions.
Solving ambiguous analyses with less well-defined inputs and outputs, driving to the heart of the problem and identifying root causes.
Handling large data sets in analysis through the use of additional tools.
Deriving recommendations from analysis that significantly impact a department, create new processes, or change existing processes.
Understanding the basics of test and control comparison, and providing insights through basic statistical measures such as hypothesis testing (a minimal sketch follows this posting).
Identifying and implementing optimal communication mechanisms based on the data set and the stakeholders involved.
Communicating complex analytical insights and business implications effectively.

About the team
This central BIE team within ASCS will be responsible for building a structured analytical data layer, bringing in BI discipline by defining metrics in a standardized way and establishing a single definition of metrics across the catalog ecosystem. The team will also identify clear sources of truth for critical data, and will build and maintain the data pipelines for critical projects tailored to the needs of ASCS teams, leveraging catalog data to provide a unified view of product information. This will support real-time decision-making and empower teams to make data-driven decisions quickly, driving innovation. The team will leverage advanced analytics that can shift us to a proactive, data-driven approach, enabling informed decisions that drive growth and enhance the customer experience. It will adopt best practices, standardize metrics, and continuously iterate on queries and data sets as they evolve. Automated quality controls and real-time monitoring will ensure consistent data quality across the organization.

Basic Qualifications
10+ years of professional or military experience.
6+ years of SQL experience.
Experience programming to extract, transform and clean large (multi-TB) data sets.
Experience with the theory and practice of design of experiments and statistical analysis of results.
Experience with AWS technologies.
Experience in scripting for automation (e.g., Python) and advanced SQL skills.
Experience with the theory and practice of information retrieval, data science, machine learning and data mining.
Experience in the data/BI space.
Knowledge of data warehousing and data modeling.
Proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight/Tableau to drive data-driven decision making.
Experience with statistical analytics and programming languages (e.g., Python, Java, Ruby, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes.
Experience applying basic statistical methods (e.g., regression, t-tests, chi-squared tests) as well as exploratory, deterministic, and probabilistic analysis techniques to solve complex business problems.
Track record of generating key business insights and collaborating with stakeholders.
Experience working directly with business stakeholders to translate between data and business needs.
Superior verbal and written communication and presentation skills; experience working across functional teams and with senior stakeholders.
Track record of building automated, scalable analytical solutions.
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, Operations Research, Data Science, Economics, Business Administration, or a similar related discipline.

Preferred Qualifications
Experience managing, analyzing and communicating results to senior leadership.
Master's degree in statistics, data science, or an equivalent quantitative field.
Experience building measures and metrics, and developing reporting solutions.
Experience using cloud storage and computing technologies such as AWS Redshift, S3, Hadoop, etc.
Experience building and maintaining data pipelines and ETL processes.
Experience with statistical analysis and correlation analysis, as well as exploratory, deterministic, and probabilistic analysis techniques.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI - Karnataka
Job ID: A2945833
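As a concrete example of the hypothesis testing this role references, the sketch below runs a Welch two-sample t-test on synthetic test-and-control data with SciPy. The metric, sample sizes, and effect size are invented for illustration.

```python
# Test-vs-control comparison with a two-sample t-test (SciPy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=100.0, scale=15.0, size=500)    # baseline metric
treatment = rng.normal(loc=97.0, scale=15.0, size=500)   # metric after a change

# Welch's t-test (equal_var=False) does not assume equal variances.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means the observed difference is unlikely under the
# null hypothesis of equal means -- the statistical rigor the posting
# asks for before reporting an insight as a real trend.
```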
Posted 1 week ago
55.0 years
4 - 4 Lacs
Pune
Remote
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
Engage with project activities across the information lifecycle, often related to paradigms like building and managing business data lakes and ingesting data streams to prepare data, developing machine learning and predictive models to analyze data, visualizing data, empowering information consumers with agile data models that enable self-service BI, and specializing in business models and architectures across various industry verticals.
Participate in business requirements/functional specification definition, scope management, and data analysis and design, in collaboration with both business stakeholders and IT teams; document detailed business requirements, and develop solution designs and specifications.

Your Profile
B.E. / B.Tech. + MBA (Systems / Data / Data Science / Analytics / Finance) with a good academic background.
Strong communication, facilitation, relationship-building, presentation, and negotiation skills.
A flair for storytelling and the ability to present interesting insights from the data.
Good soft skills: communication, proactivity, self-learning, etc.
Flexibility toward the dynamically changing needs of the industry.
Good exposure to database management systems; good to have knowledge about the big data ecosystem, like Hadoop.
Hands-on with SQL and good knowledge of NoSQL-based databases.
Good to have working knowledge of the R/Python language.
Exposure to or knowledge about one of the cloud ecosystems – Google/AWS/Azure.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Role: Solix Data Archival Admin
Key skills: Solix administration, experience working with RDBMSes (Oracle/SQL/Postgres), writing SQL queries, and experience working on Unix (commands, basic scripting, etc.); a minimal sketch of this kind of scripting follows the posting.

Key Responsibilities
Install, configure and support Solix and its integration with Hadoop and other backend ecosystems.
Analyze, support and onboard applications for the purpose of archiving into the platforms and technologies listed above.
Integrate and publish the archiving operations with Citi's standardized systems, such as CMDB, CSI, Collaborate, etc.
Document and market the team's services and accomplishments to the team's client base.
Develop user interfaces or templates to ease application onboarding and customization.
Integrate the solutions and operations with Tivoli, EERS and many others.
Good interpersonal skills with excellent communication skills - written and spoken English.
Able to interact with client projects in cross-functional teams.
Good team player interested in sharing knowledge, cross-training other team members, and learning new technologies and products.
Ability to create documents of high quality.
Ability to work in a structured environment and follow procedures, processes, and policies.
Self-starter who works with minimal supervision.
Ability to work in a team of diverse skill sets and geographies.

Role Purpose
The purpose of the role is to resolve, maintain and manage the client's software/hardware/network based on the service requests raised from the end user, as per the defined SLAs, ensuring client satisfaction.

Mandatory Skills: Solix Data Archival
Experience: 3-5 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
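As a flavor of the SQL-plus-scripting work this role combines, here is a minimal sketch that counts archive-eligible rows in a relational source from Python, assuming a Postgres database and the psycopg2 driver. The connection details, table, and retention rule are hypothetical placeholders, not part of any Solix workflow.

```python
# Count rows that a retention policy would mark as archive-eligible.
# Host, credentials, table, and the 7-year rule are all hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="db.example.internal",  # hypothetical host
    dbname="appdb",
    user="archiver",
    password="***",
)
with conn, conn.cursor() as cur:  # commits on clean exit
    cur.execute(
        """
        SELECT COUNT(*)
        FROM orders
        WHERE created_at < NOW() - INTERVAL '7 years'  -- sample retention rule
        """
    )
    print("rows eligible for archival:", cur.fetchone()[0])
conn.close()
```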
Posted 1 week ago
The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.
Major IT hubs such as Bengaluru, Chennai, and Pune are known for their thriving IT industry and have a high demand for Hadoop professionals.
The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.
In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.
In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
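For candidates new to the ecosystem, the classic first Spark exercise, word count, shows how these skills fit together. The sketch below uses PySpark with a placeholder HDFS input path.

```python
# Classic word count over a text file on HDFS, using PySpark.
# The input path is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///data/sample.txt")  # placeholder path
    .flatMap(lambda line: line.split())   # one record per word
    .map(lambda word: (word, 1))          # pair each word with a count of 1
    .reduceByKey(lambda a, b: a + b)      # sum the counts per word
)
for word, n in counts.take(10):
    print(word, n)

spark.stop()
```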
As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!
Accenture - 36723 Jobs | Dublin
Wipro - 11788 Jobs | Bengaluru
EY - 8277 Jobs | London
IBM - 6362 Jobs | Armonk
Amazon - 6322 Jobs | Seattle, WA
Oracle - 5543 Jobs | Redwood City
Capgemini - 5131 Jobs | Paris, France
Uplers - 4724 Jobs | Ahmedabad
Infosys - 4329 Jobs | Bangalore, Karnataka
Accenture in India - 4290 Jobs | Dublin 2