8.0 years
Noida, Uttar Pradesh, India
Remote
Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor.)

Must-have skills: SQL, Python, Tableau, OOP, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as early as possible, investigating and fixing data issues, setting up new configurations and verifying that they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with the relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while looking out for the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing, and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL, and data visualization practices.

Responsibilities:
- Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
- Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage (see the sketch after this list).
- Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
- Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
- High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
- Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
- Continuous Learning and Improvement: Stay current with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
- Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.
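To make the automated-testing responsibility concrete, here is a minimal sketch of data-quality checks written in the pytest style. The CSV file, column names, and thresholds are illustrative assumptions, not details from the actual role or codebase.

```python
# A minimal sketch of automated data-quality tests with pandas + pytest.
# The CSV path, column names, and valid ranges are hypothetical examples.
import pandas as pd
import pytest

@pytest.fixture
def rates() -> pd.DataFrame:
    # In a real suite this would load researched data from its source of truth.
    return pd.read_csv("researched_rates.csv")  # hypothetical file

def test_no_missing_provider(rates):
    # Every row must identify the provider it was researched from.
    assert rates["provider"].notna().all(), "rows with missing provider"

def test_rates_within_plausible_range(rates):
    # Flag values that are impossible rather than merely unusual.
    assert rates["apr_percent"].between(0, 100).all(), "APR outside 0-100%"

def test_no_duplicate_records(rates):
    # The same provider/product pair should be researched only once.
    dupes = rates.duplicated(subset=["provider", "product"]).sum()
    assert dupes == 0, f"{dupes} duplicate provider/product rows"
```

Run with `pytest` on a schedule or in CI; each failed assertion points at a specific class of data error to investigate.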
Skills and Experience:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
- Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks (a sketch of one follows this list).
- Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
- Attention to Detail: Meticulous attention to data nuances and the ability to spot subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
- Technical Proficiency:
  - Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
  - Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to write modular, reusable, and efficient code structures. Experience in applying OOP best practices to improve code organization and maintainability.
  - Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
  - Tools: Advanced knowledge of Tableau. Familiarity with Knime or similar data processing tools is a plus.
- Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
- Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
- Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
- Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
- Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
- Adaptability and Continuous Learning: Stays current with data engineering trends and fosters a culture of continuous learning and process evolution within the team.
- Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.
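As an illustration of the kind of OOP-based validation framework this role calls for, here is a minimal sketch under stated assumptions: the rule classes, column names, and sample data are hypothetical, and a production framework would add configuration, logging, and reporting (e.g., feeding counts into a Tableau dashboard).

```python
# A minimal sketch of an OOP-style data validation framework in Python.
# Rule classes, column names, and sample data are hypothetical illustrations.
from abc import ABC, abstractmethod
import pandas as pd

class ValidationRule(ABC):
    """A single check that reports the rows violating it."""
    name: str = "rule"

    @abstractmethod
    def failing_rows(self, df: pd.DataFrame) -> pd.DataFrame: ...

class NotNull(ValidationRule):
    def __init__(self, column: str):
        self.column = column
        self.name = f"not_null:{column}"

    def failing_rows(self, df: pd.DataFrame) -> pd.DataFrame:
        return df[df[self.column].isna()]

class InRange(ValidationRule):
    def __init__(self, column: str, low: float, high: float):
        self.column, self.low, self.high = column, low, high
        self.name = f"in_range:{column}"

    def failing_rows(self, df: pd.DataFrame) -> pd.DataFrame:
        return df[~df[self.column].between(self.low, self.high)]

def run_checks(df: pd.DataFrame, rules: list[ValidationRule]) -> dict[str, int]:
    # Count of failing rows per rule; a real framework would also export details.
    return {rule.name: len(rule.failing_rows(df)) for rule in rules}

if __name__ == "__main__":
    data = pd.DataFrame({"provider": ["A", None], "apr_percent": [12.5, 250.0]})
    report = run_checks(data, [NotNull("provider"), InRange("apr_percent", 0, 100)])
    print(report)  # {'not_null:provider': 1, 'in_range:apr_percent': 1}
```

New checks are added by subclassing `ValidationRule`, which keeps the framework modular and easy to cover with the unit tests described earlier.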
Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
5.0 years
Surat, Gujarat, India
Remote
Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor.)

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview
At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with the relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL and database design principles, plus familiarity with Python programming, are key qualifications for this role.

Responsibilities:
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, from small-scale databases to databases involving big data processing.
- Work on data security and compliance by implementing access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a sketch of such a workflow follows this list).
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health and identify and resolve issues.
- Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Leverage online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.
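As a concrete illustration of the import workflow described above, here is a minimal sketch that loads a spreadsheet export into PostgreSQL with pandas and SQLAlchemy. The file name, table name, and connection string are hypothetical assumptions; a production workflow would add validation, retries, and logging.

```python
# A minimal sketch of a spreadsheet-to-PostgreSQL import using pandas + SQLAlchemy.
# The CSV file, table name, and connection URL below are hypothetical examples.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; in practice this comes from config or secrets.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

def import_spreadsheet(path: str, table: str) -> int:
    df = pd.read_csv(path)

    # Light normalization before load: consistent column names, trimmed strings.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    for col in df.select_dtypes(include="object"):
        df[col] = df[col].astype("string").str.strip()

    # Recreate the staging table on each run and drop obvious duplicate rows.
    df = df.drop_duplicates()
    df.to_sql(table, engine, if_exists="replace", index=False)
    return len(df)

if __name__ == "__main__":
    rows = import_spreadsheet("research_export.csv", "staging_research")
    print(f"Loaded {rows} rows into staging_research")
```

Swapping the connection URL is enough to target a managed instance such as AWS RDS; BigQuery loads would use its own client library instead of `to_sql`.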
Skills and Experience:
- A Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and an understanding of database design principles, normalization, and indexing (see the indexing sketch after this list).
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good and effective communication skills.
- Comfort with autonomy and the ability to work independently.
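To ground the indexing and query-plan points, here is a minimal sketch of inspecting a PostgreSQL execution plan and adding an index through SQLAlchemy. It reuses the hypothetical staging table and connection string from the earlier import sketch; actual plan output varies with table size, statistics, and PostgreSQL version.

```python
# A minimal sketch: inspect a PostgreSQL query plan, add an index, re-check.
# Table, column, and connection URL reuse the hypothetical examples above.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

def explain(sql: str) -> None:
    # Print the planner's chosen strategy for a query without executing it fully.
    with engine.connect() as conn:
        for row in conn.execute(text("EXPLAIN " + sql)):
            print(row[0])

query = "SELECT * FROM staging_research WHERE provider = 'A'"

explain(query)  # on a large unindexed table, typically a sequential scan

with engine.begin() as conn:
    # An index on the filtered column lets the planner avoid full-table scans.
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_staging_provider "
        "ON staging_research (provider)"
    ))

explain(query)  # may now show an index or bitmap scan, depending on statistics
```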
Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience : 8.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership Forbes Advisor is Looking for: Job Description: Data Integrity Manager Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Integrity Team is a brand-new team with the purpose of ensuring all primary, publicly accessible data collected by our researchers is correct and accurate, allowing the insights produced from this data to be reliable. They collaborate with other teams while also operating independently. Their responsibilities include monitoring data researched to ensure that errors are identified and caught as soon as possible, creating detective skills for looking for issues and mending them, setting up new configurations and ensuring they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance, playing a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Data Integrity Manager will involve guiding team members through their tasks whilst looking for the next set of possible problems. They should understand about how to automate systems, optimization techniques, and best practices in debugging, testing and looking for issues. They work closely with other team members, offering technical mentorship, as well as advanced Python, SQL and data visualization practices. Responsibilities: Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards. Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability and advocate for automated test coverage. Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking to improve productivity and data quality. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality. 
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills. High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continual track and monitor data quality. Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions. Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards. Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments. Skills and Experience: Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered. Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks. Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why"—why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights. Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems. Technical Proficiency: Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing. Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability. Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows. Tools: Advanced knowledge of Tableau. Familiarity with Knime or similar data processing tools is a plus. Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols. Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting. 
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving. Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Strong at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing. Adaptability and Continuous Learning: Stay updated on data engineering trends and foster a culture of continuous learning and process evolution within the team. Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows and ability to troubleshoot pipeline issues quickly with the ability to automate repetitive data processes to improve efficiency and reliability. Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health well-being Monthly Office Commutation Reimbursement Program Paid paternity and maternity leaves How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience : 5.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Python, Postgre SQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, Pyspark, AWS Lambda Forbes Advisor is Looking for: Job Description: Data Research - Database Engineer Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards. 
Responsibilities:
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, including small-scale databases and databases involving big-data processing.
- Work on data security and compliance by implementing access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms (see the sketch after this list).
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health and identify and resolve issues.
- Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Familiarize yourself with tools and technologies used in the team's workflow, such as KNIME for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
- Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Use online resources such as Stack Overflow, ChatGPT, Bard, etc. effectively, while keeping their capabilities and limitations in mind.
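Reading an execution plan is usually the first step in the query-optimization work listed above. The following minimal sketch reuses the hypothetical staging_research table and an assumed product_id column; it inspects a PostgreSQL plan and then creates an index with SQLAlchemy.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

with engine.connect() as conn:
    # Print the execution plan for a lookup that appears slow.
    plan = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM staging_research WHERE product_id = :pid"),
        {"pid": 42},
    )
    for row in plan:
        print(row[0])

    # A sequential scan over a large table in the plan usually means the
    # filtered column needs an index; that is the first remedy to try.
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_staging_research_product_id "
        "ON staging_research (product_id)"
    ))
    conn.commit()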
Skills and Experience:
- A Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- A strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good, effective communication skills.
- Comfort with autonomy and the ability to work independently.
Perks:
- A day off on the 3rd Friday of every month (one long weekend each month)
- A monthly Wellness Reimbursement Program to promote health and well-being
- A monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
8.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
What do you need for this opportunity?
Must-have skills: SQL, Python, Tableau, OOP, KNIME, data integrity, QA experience, ETL tools, leadership
Forbes Advisor is looking for:
Job Description: Data Integrity Manager
The role summary and the opening responsibilities (technical mentorship and code quality; testing and quality assurance leadership; process improvement and documentation) are identical to the Data Integrity Manager listing above.
Responsibilities (continued):
- Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau, and KNIME, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
- High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions that continually track and monitor data quality (a sketch of such a check follows this list).
- Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
- Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
- Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.
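To make the data-quality monitoring and testing themes above concrete, here is a minimal sketch of an automated check with a unit test attached. The dataset, column names (provider, category, apr), and thresholds are illustrative assumptions, not the team's actual framework; the test runs under pytest.

import pandas as pd

def find_suspect_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows whose values look wrong enough to need human review."""
    flags = pd.DataFrame(index=df.index)
    blank = df["provider"].astype(str).str.strip() == ""
    flags["missing_provider"] = df["provider"].isna() | blank
    flags["apr_out_of_range"] = ~df["apr"].between(0, 100)
    # Values far from their category's median are often entry errors.
    median_apr = df.groupby("category")["apr"].transform("median")
    flags["apr_outlier"] = (df["apr"] - median_apr).abs() > 3 * df["apr"].std()
    return df[flags.any(axis=1)]

def test_find_suspect_rows_flags_bad_records():
    df = pd.DataFrame({
        "provider": ["Acme Bank", "Beta Loans", ""],
        "category": ["cards", "cards", "loans"],
        "apr": [19.9, -5.0, 12.0],
    })
    suspects = find_suspect_rows(df)
    assert len(suspects) == 2  # the negative APR and the blank provider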
Skills and Experience:
- Educational Background: A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field; equivalent experience in data engineering roles will also be considered.
- Data Integrity and Validation Experience: A strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
- Analytical and Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify the root causes of data issues and propose actionable solutions, plus the ability to work with ambiguous data and derive meaningful insights.
- Attention to Detail: Meticulous attention to data nuances and the ability to spot subtle discrepancies, with a strong focus on data accuracy, completeness, and consistency across systems.
- Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
- Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to build modular, reusable, and efficient code structures, and experience in applying OOP best practices to improve code organization and maintainability.
- Data Management: Proficiency in MySQL and database design, with experience in creating efficient data pipelines and workflows.
- Tools: Advanced knowledge of Tableau; familiarity with KNIME or similar data processing tools is a plus.
- Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
- Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
- Leadership and Mentorship: A demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion, plus experience leading code reviews and guiding team members in problem-solving and troubleshooting.
- Problem-Solving Skills: The ability to handle complex technical issues and serve as a key resource for team troubleshooting, with expertise in guiding others through debugging and technical problem-solving.
- Strong Communication Skills: The ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows; strength in conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration; and skill in documenting data issues, methodologies, and technical workflows for knowledge sharing.
- Adaptability and Continuous Learning: Stays updated on data engineering trends and fosters a culture of continuous learning and process evolution within the team.
- Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications; a strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.
Perks:
- A day off on the 3rd Friday of every month (one long weekend each month)
- A monthly Wellness Reimbursement Program to promote health and well-being
- A monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools (Prometheus), ETL tools, data warehousing, Pandas, PySpark, AWS Lambda
Job Description: Data Research - Database Engineer. The role description, responsibilities, skills, perks, and application steps are identical to the Ahmedabad listing above.
Posted 2 weeks ago
8.0 years
0 Lacs
Greater Lucknow Area
Remote
Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
Must-have skills: SQL, Python, Tableau, OOP, KNIME, data integrity, QA experience, ETL tools, leadership
Job Description: Data Integrity Manager. The role description, responsibilities, skills, perks, and application steps are identical to the Data Integrity Manager listing above.
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Lucknow Area
Remote
Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools (Prometheus), ETL tools, data warehousing, Pandas, PySpark, AWS Lambda
Job Description: Data Research - Database Engineer. The role description, responsibilities, skills, perks, and application steps are identical to the Ahmedabad listing above.
Posted 2 weeks ago
8.0 years
0 Lacs
Thane, Maharashtra, India
Remote
Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
Must-have skills: SQL, Python, Tableau, OOP, KNIME, data integrity, QA experience, ETL tools, leadership
Job Description: Data Integrity Manager. The role description, responsibilities, skills, perks, and application steps are identical to the Data Integrity Manager listing above.
Posted 2 weeks ago
5.0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Experience : 5.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Python, Postgre SQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, Pyspark, AWS Lambda Forbes Advisor is Looking for: Job Description: Data Research - Database Engineer Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards. 
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
Perks:

Day off on the third Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leave

How to apply for this opportunity?

Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
8.0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: SQL, Python, Tableau, OOP, KNIME, data integrity, QA experience, ETL tools, leadership

Forbes Advisor is looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and corrected as early as possible, investigating and mending data issues, setting up new configurations and verifying their correctness, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while looking for the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing, and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL, and data visualization practices.

Responsibilities:

Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability and advocate for automated test coverage.
Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau, and KNIME, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality (a minimal sketch follows this list).
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.
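To make the "continually track and monitor data quality" idea concrete, here is a minimal sketch of a rule-based quality-check runner against MySQL (the database the posting names); the table, connection string, and rules are hypothetical examples only, not part of the actual workflow.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical MySQL source -- swap in the tables your researchers populate.
    ENGINE = create_engine("mysql+pymysql://user:password@localhost:3306/research")

    # Each rule names a condition that should match zero rows in healthy data.
    CHECKS = {
        "null_price": "price IS NULL",
        "negative_price": "price < 0",
        "stale_rows": "updated_at < NOW() - INTERVAL 30 DAY",
    }

    def run_checks(table: str) -> pd.DataFrame:
        """Count rows failing each rule; nonzero counts flag quality issues."""
        rows = []
        with ENGINE.connect() as conn:
            for name, predicate in CHECKS.items():
                count = conn.exec_driver_sql(
                    f"SELECT COUNT(*) FROM {table} WHERE {predicate}"
                ).scalar()
                rows.append({"check": name, "failing_rows": count})
        return pd.DataFrame(rows)

    if __name__ == "__main__":
        print(run_checks("product_prices").to_string(index=False))

Scheduled daily and fed into a Tableau dashboard, a runner like this catches researcher-entry errors as soon as they land rather than after insights have shipped.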
Skills and Experience:

Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency: Expert-level Python skills, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau. Familiarity with KNIME or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing (a small pytest-style sketch follows this list).
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Strong at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stays updated on data engineering trends and fosters a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.
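As a small illustration of the unit-testing expertise called for above, the pytest sketch below exercises a toy validation helper; clean_price and its rules are invented for the example and do not come from the posting.

    # A minimal pytest sketch for the kind of validation logic described above.
    import math

    import pytest

    def clean_price(raw: str) -> float:
        """Parse a researcher-entered price; reject blanks, NaNs, and negatives."""
        value = float(str(raw).replace("$", "").replace(",", "").strip())
        if math.isnan(value) or value < 0:
            raise ValueError(f"invalid price: {raw!r}")
        return value

    @pytest.mark.parametrize(
        "raw, expected",
        [("19.99", 19.99), ("$1,250.00", 1250.0), (" 3 ", 3.0)],
    )
    def test_clean_price_accepts_valid_inputs(raw, expected):
        assert clean_price(raw) == expected

    @pytest.mark.parametrize("raw", ["-5", "nan"])
    def test_clean_price_rejects_bad_inputs(raw):
        with pytest.raises(ValueError):
            clean_price(raw)

Parametrized cases like these make the testing protocol cheap to extend: each newly discovered data defect becomes one more row in the table.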
Perks:

Day off on the third Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leave

How to apply for this opportunity?

Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
7.0 years
0 Lacs
Delhi, India
On-site
Job Overview

Position Name: Faculty, Wadhwani Center for Government Digital Transformation (WGDT)
Work Location: Delhi

About Wadhwani Foundation (www.wfglobal.org)

Mission: Accelerating economic development in emerging economies through high-value job creation.
Objectives: Enabling the creation of 10M jobs and placement of 25M by 2030 across 20-25 emerging economies.

Wadhwani Foundation is a not-for-profit with the primary mission of accelerating economic development in emerging economies by driving large-scale job creation through entrepreneurship, innovation and skills development. Founded in 2000 by Silicon Valley entrepreneur Dr. Romesh Wadhwani, today the Foundation is scaling impact in 25 countries across Asia, Africa, and Latin America through various initiatives. More details on the various programs are at the end of the document.

Job Description

Learning Strategy & Subject Matter Expertise

Work in conjunction with the WGDT Academy team to decide subject matter and the best methodologies for training the target audiences (central and state government bureaucrats).
Create content on emerging technologies such as data science, machine learning, computer vision, natural language processing, and generative AI for a senior audience of government officials, with relevant social-sector examples and use cases.
Help formulate case studies using no/low-code tools for senior policymakers.
Review the learning content designed by the curriculum designer to ensure accuracy and depth from a subject-matter perspective.
Research, produce, and deliver high-quality learning assets such as training decks, facilitator guides, learner guides, assessments, and other supporting content.

Learning Delivery

Demonstrate strong teaching skills for a senior audience in both physical and virtual classroom environments, and adapt teaching styles accordingly.
Manage multiple teaching projects simultaneously and liaise with stakeholders to execute course requirements.
Take full responsibility for assigned cohorts, from classroom setup to group assignments to learning interventions, and on to data collection on usage, assessments, quality, and feedback.
Collate and illustrate points using the flipped-classroom and case-study methodologies, in line with the major requirements of adult learning.
Identify and address individual learner requirements so that there is "no student left behind", which includes follow-ups for assignments, assessments, and feedback to and from learners.
Demonstrate excellent stakeholder relationship management skills.
Use modern communication tools like Teams, Zoom, or other learning platforms as required.
Bring experience in both in-person and online training for a senior audience.

Requirements

You have at least 7 years of experience.
You have at least 3 years of experience as a trainer (freelance or full-time) in emerging technologies.
You possess deep knowledge of the subject area, including the latest analytics-based technologies.
You can instruct senior-level learners, with a talent for effectively engaging adult students of diverse ages and backgrounds.
You are competent in teaching technical subjects to a non-technical audience, using simple language and avoiding excessive jargon.
Work in governance and policy will be an asset but is not essential Effective verbal communication skills Technical Skills Expert level knowledge of one or more of the Emerging Technologies such as data science, machine learning, computer vision, natural language processing, Generative AI and large language models Knowledge of a no/low code tools like Orange/Knime is helpful (but not essential) Knowledge of Python/ R is helpful (but not essential) Ability to handle and engage a heterogeneous participant base with maturity Experience in using and creating content for Virtual Learning platforms, MOOCs Experience in building new case studies, use cases and assessments in emerging technology areas At least a Bachelors’ degree Show more Show less
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who we are
About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.
About The Team
The Stripe Product Accounting Team is responsible for supporting all products at Stripe, consulting on accounting implications and supporting teams' ability to make informed strategic decisions. We are responsible for the accurate and timely recording of all business-generated transactions on our balance sheet and income statement. We operate in a fast-paced environment and collaborate significantly with cross-functional and international teams.
What you’ll do
Stripe is seeking an experienced accountant to join its world-class Accounting team and help us scale for the future, in a fast-paced environment that is growing rapidly. In this role you will support our Payments and Payment Products revenue accounting, leveraging your technical expertise with US GAAP, specifically ASC 606, to quickly identify accounting implications and impacts to customers while advising and collaborating with team members cross-functionally to develop operational processes that help us scale.
Responsibilities
Partner closely with our product, go-to-market, and finance partners to thoroughly understand new products, features, and contracts
Leverage your familiarity with US GAAP, specifically ASC 606 (IFRS 15 acceptable), and an in-depth understanding of transaction-level processes to quickly identify the accounting implications of proposed contract terms and product design, and collaborate with team members to provide solutions that meet all stakeholders’ objectives
Document the Company’s accounting positions and communicate them to varying levels of the organization
Partner with internal systems and engineering teams to support internal financial systems and automation of accounting processes
Assess the effectiveness of internal controls, and design new processes and controls for emerging and growing business activities
Develop and maintain up-to-date accounting procedural documentation
Understand balance sheet reconciliations, variance analyses, and financial reporting deliverables, and perform analytical reviews
Produce internal management analyses and reporting
Support external audit processes
Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
Minimum Requirements
An accountancy qualification (e.g. CA, CPA, ACA, ACCA, CIMA) with 3-6 years of relevant accounting experience
A degree in Accounting or Finance
Working knowledge of US GAAP - ASC 606 (or IFRS 15)
The ability to bring structure to ambiguous areas of opportunity and thrive in a fast-moving environment
Strong analytical skills and strong knowledge of Google Sheets/Excel
An enthusiastic “roll up your sleeves” mentality
A passion for creating new systems and solving problems via infrastructure and automation
Demonstrated ability to work cross-functionally and with different cultures
Demonstrated experience with internal controls
Ability to be flexible and comfortable with changing requirements
Preferred Qualifications
CPA/CA or similar qualification
Technical expertise with US GAAP, specifically ASC 606/IFRS 15
Knowledge of (or experience in) the technology or payments industry
Experience working with Oracle Suite and Salesforce
Experience with Hubble and SQL
Experience with analytical tools like Power BI, Tableau, KNIME, Python, and Alteryx
In-office expectations
Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements, to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.
Pay and benefits
Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.
Posted 2 weeks ago
180.0 years
0 Lacs
Goa, India
On-site
Job Location
GOA PLANT
Job Description
P&G was founded over 180 years ago as a simple soap and candle company. Today, we're the world’s largest consumer goods company and home to iconic, trusted brands that make life a little bit easier in small but significant ways. We've spanned three centuries thanks to three simple ideas: leadership, innovation and citizenship. The insight, innovation and passion of hardworking teams has helped us grow into a global company that is governed responsibly and ethically, that is open and clear, and that supports good causes and protects the environment. This is a place where you can be proud to work and do something that matters.
Dedication from Us: You will be at the core of ground-breaking innovations, be given exciting opportunities, lead initiatives, and take charge and responsibility, in creative workspaces where new insights thrive. All the while, you'll receive outstanding training to help you become a leader in your field.
What we Offer: Continuous mentorship – work with peers and receive both formal training as well as day-to-day mentoring from your manager. A multifaceted and encouraging work environment – employees are at the centre; we value every individual and support initiatives, promoting agility and work/life balance.
Overview Of The Job
This role provides an ideal place to work on ground-breaking upstream improvements related to the manufacturing and processing of our leading products, with intelligent, connected technologies driving the 4th industrial revolution. Our aim is to ignite your potential and equip you to enhance the capability, safety, and efficiency of all our systems while reducing cost and boosting sustainability.
Your team
This role reports to the Dry Laundry global Platform leader. You will be working with project teams across Engineering, Manufacturing and Quality across the globe.
How Success Looks Like
Enabling savings every year on the base budget as per the masterplan
Delivering the touchless digital transformation masterplan and reapplying it across the globe
Responsibilities of the role (Product Supply Data Scientist)
Provide technical leadership in supporting OU innovation projects; act as a key enabler in leading and delivering results against challenges
Work with the innovation team looking at new platforms, machine control, data processing and analytics; help develop capability in others
Develop and plan required analytic projects in response to business needs
Leverage data science tools to tackle the toughest process problems in the region
Develop new analytics/predictive/prescriptive modeling methods and/or tools as required
Propose prescriptive analytic models to build robust and fault-tolerant process control strategies to reduce operations effort and improve product quality
Work with process/equipment authorities and application developers to identify data relevant for analysis
Contribute, together with process/equipment owners and IT/OT, to the development and evolution of data models for analytical capabilities
Own data model maintenance and development for the hub site
Develop and maintain key data pipelines across selected sites globally
Contribute to defining work processes to deploy and maintain predictive/analytical modeling architectures, modeling standards, alarming and reporting, and data analysis methodologies
Conduct externally focused research to drive suggestions on analytical modeling products, services, protocols, and standards that might support and speed up the smart manufacturing journey
Identify, diagnose, and resolve prognostic model performance issues
Combine Reliability Engineering with data science to develop new solutions that reduce losses
Job Qualifications
Role Requirements
Sufficient business knowledge to understand what data is important, when findings are relevant, and how to exploit data to make decisions
Strong familiarity with data preparation and processing
Strong analytical, statistical and mathematical modeling capabilities to form hypotheses and to collect, explore, and extract insights from structured and unstructured data, rationally transforming raw data into useful information to improve process/equipment operation and maintenance
Proficient use of software for data visualization, statistical analysis and ML, e.g., Python, R, SAS, Azure ML, Matlab, Power BI, Tableau, and familiarity with functional programming and scripting languages, e.g., .NET, VB, C++, Python, Matlab
Understanding of data architecture concepts, i.e., relational database structures, data warehouses, big data management, data queries, etc.
Technical understanding of mechatronic systems / first principles (combining principles of process, mechanical, electrical, control and systems engineering)
Experience in advanced analytics, big data, and machine learning
Experience in designing visualizations to improve user experience and effective decision making
Innovation approach – ability to assess business operations and find opportunities, then drive the case
Experience in data engineering using Azure Data Factory, Databricks, SQL, KNIME
Experience in Agile projects (Scrum, Kanban) and operating in DevOps and CI/CD environments
IT Security & Risk – in touch with modern IT security risks and principles
Proven ability to handle concurrent priorities, with strong written and verbal communication skills to influence others and act collaboratively across functions
A collaborative attitude with clear technical communication skills to explain data models and data analytic solutions to other technical peers and to leadership teams
We produce globally recognized brands, and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders can lead with courage the vast array of brands, categories, and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more.
We are an equal-opportunity employer and value diversity at our company. We do not discriminate against individuals based on race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor.
“At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All the available job opportunities are posted either on our website - pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any kind of fees towards their application.”
Job Schedule: Full time
Job Number: R000132656
Job Segmentation: Recent Grads/Entry Level (Job Segmentation)
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Lowe’s
Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 17 million customer transactions a week in the U.S. With total fiscal year 2022 sales of over $97 billion, approximately $92 billion of sales were generated in the U.S., where Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts.
About Lowe’s India
At Lowe's India, we are the enablers who help create an engaging customer experience for our $97 billion home improvement business at Lowe's. Our 4000+ associates work across technology, analytics, business operations, finance & accounting, product management, and shared services. We leverage new technologies and find innovative methods to ensure that Lowe's has a competitive edge in the market.
About the Team
The Pricing Analytics team supports pricing managers and merchants in defining and optimizing pricing strategies for various product categories across channels. The team leverages advanced analytics to forecast and measure the impact of pricing actions, develop strategic price zones, recommend price changes, and identify sales/margin opportunities to achieve company targets.
Job Summary:
The primary purpose of this role is to develop and maintain descriptive and predictive analytics models and tools that support Lowe's pricing strategy. Collaborating closely with the Pricing team, the analyst will help translate pricing goals and objectives into data and analytics requirements. Utilizing both open-source and commercial data science tools, the analyst will gather and wrangle data to deliver data-driven insights and trends and to identify anomalies. The analyst will apply the most suitable statistical and machine learning techniques to answer relevant questions and provide retail recommendations. The analyst will actively collaborate with product and business teams, incorporating feedback throughout development to drive continuous improvement and ensure a best-in-class position in the pricing space.
Roles & Responsibilities:
Core Responsibilities:
Translate pricing strategy and business objectives into analytics requirements.
Develop and implement processes for collecting, exploring, structuring, enhancing, and cleaning large datasets from both internal and external sources.
Conduct data validation, detect outliers, and perform root cause analysis to prepare data for statistical and machine learning models.
Research, design, and implement relevant statistical and machine learning models to solve specific business problems.
Ensure the accuracy of data science and machine learning model results and build trust in their reliability.
Apply machine learning model outcomes to relevant business use cases.
Assist in designing and executing A/B tests, multivariate experiments, and randomized controlled trials (RCTs) to evaluate the effects of price changes (an illustrative sketch follows this listing).
Perform advanced statistical analyses (e.g., causal inference, Bayesian analysis, regression modeling) to extract actionable insights from experimentation data.
Collaborate with teams such as Pricing Strategy & Execution, Analytics COE, Merchandising, IT, and others to define, prioritize, and develop innovative solutions.
Keep up to date with the latest developments in data science, statistics, and experimentation techniques.
Automate routine manual processes to improve efficiency.
Years of Experience:
3-6 years of relevant experience
Education Qualification & Certifications (optional)
Required Minimum Qualifications:
Bachelor’s or Master’s in Engineering, Business Analytics, Data Science, Statistics, Economics, or Math
Skill Set Required
Primary Skills (must have):
3+ years of experience in advanced quantitative analysis, statistical modeling and machine learning.
Ability to apply analytical concepts such as regression, sampling techniques, hypothesis testing, segmentation, time series analysis, multivariate statistical analysis, and predictive modelling.
3+ years’ experience in corporate data science, analytics, pricing & promotions, merchandising, or revenue management.
3+ years’ experience working with common analytics and data science software and technologies such as SQL, Python, R, or SAS.
3+ years’ experience working with enterprise-level databases (e.g., Hadoop, Teradata, Oracle, DB2).
3+ years’ experience using enterprise-grade data visualization tools (e.g., Power BI, Tableau).
3+ years’ experience working with cloud platforms (e.g., GCP, Azure, AWS).
Secondary Skills (desired):
Technical expertise in Alteryx, KNIME.
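For illustration only (not part of the listing): a minimal Python sketch, under stated assumptions, of analyzing a price-change A/B test of the kind described above. The CSV file, the variant and margin columns, and the 5% significance threshold are hypothetical.

import pandas as pd
from scipy import stats

# Hypothetical experiment export: one row per transaction, with the price
# variant shown ('control' or 'test') and the realized margin.
df = pd.read_csv("price_experiment.csv")

control = df.loc[df["variant"] == "control", "margin"]
test = df.loc[df["variant"] == "test", "margin"]

# Two-sample Welch t-test: does the new price shift the average margin?
t_stat, p_value = stats.ttest_ind(test, control, equal_var=False)

lift = test.mean() - control.mean()
print(f"Estimated margin lift per transaction: {lift:.4f}")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:  # conventional threshold; choose per business context
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; consider a longer test.")

A real analysis would layer in the causal-inference and regression techniques the listing names, for example controlling for store, category, and seasonality rather than comparing raw group means.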
Posted 2 weeks ago
15.0 years
4 - 8 Lacs
Hyderābād
On-site
Job Title
Quality Assurance Lead, AVP
Role Summary & Role Description
The candidate must have a minimum of 15 years' experience in Financial Services, IT or a relevant industry. This role is for a senior Quality Assurance lead automation/functional tester and requires data validation on a complex data warehousing platform.
Experience in Azure cloud and Snowflake is mandatory.
Experience with Oracle/SQL Server is mandatory.
Experience in Selenium using Java is mandatory.
Excellent understanding of and hands-on experience in ETL/DWH testing (an illustrative sketch follows this listing).
Hands-on experience with SQL (analytical functions and complex queries), SQL Developer and TOAD.
Good understanding of the DevOps ecosystem and tools like GitHub, Jenkins and Maven.
Excellent knowledge of the SDLC process, including creating, maintaining and executing automation/functional test plans, writing test scripts against software platforms, communicating test results, defect tracking and participating in day-to-day QA activities.
Experience in the Financial/Banking domain is required.
Knowledge of test methodologies and their corresponding tools, like Jira and RTC.
Good understanding of Agile methodology and Agile ceremonies.
Experience with the following tools is required: Squirrel, Toad, RTC, MS Office tools (Excel, Access, PowerPoint, Project and Word), SQL Developer, Unix KSH and BASH.
Experience in creating and executing automation scripts using Java.
Ability to work both independently and within a team.
Work with geographically distributed teams while maintaining the highest standard of collaboration and communication across the QA, development and business teams.
Excellent verbal and written communication skills.
The candidate must be self-motivated and self-driven, and consider themselves accountable for timely completion of QA deliverables.
Strong problem-solving skills with great attention to detail.
Experience with KNIME will be an added advantage.
Core/Must-have skills: Automation testing, functional testing, Selenium, Java, Azure cloud, Snowflake, Oracle
Good-to-have skills: Exposure to AI
Work Schedule: 11:00 AM – 8:00 PM (INSHIFT – 2)
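For illustration only (not part of the listing): a minimal Python sketch, under stated assumptions, of the source-to-target reconciliation at the heart of ETL/DWH testing. The reconcile helper, the trades table, and the amount checksum column are hypothetical; the demo uses in-memory SQLite connections standing in for the source (e.g., Oracle) and target (e.g., Snowflake) databases, which expose the same standard DB-API cursor interface.

import sqlite3

def reconcile(source_conn, target_conn, table: str) -> bool:
    """Compare row counts and a simple checksum between source and target."""
    checks = {
        "row_count": f"SELECT COUNT(*) FROM {table}",
        # Checksum over an assumed numeric column 'amount'.
        "amount_sum": f"SELECT SUM(amount) FROM {table}",
    }
    ok = True
    for name, sql in checks.items():
        src_cur = source_conn.cursor()
        src_cur.execute(sql)
        tgt_cur = target_conn.cursor()
        tgt_cur.execute(sql)
        src_val, tgt_val = src_cur.fetchone()[0], tgt_cur.fetchone()[0]
        if src_val != tgt_val:
            print(f"MISMATCH {table}.{name}: source={src_val} target={tgt_val}")
            ok = False
    return ok

if __name__ == "__main__":
    # Two in-memory databases stand in for the source and target systems.
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE trades (id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 10.0), (2, 5.5)])
    print("match:", reconcile(src, tgt, "trades"))

Real ETL test suites extend this idea with column-level hash comparisons and reconciliation of slowly changing dimensions, and wire the checks into CI tools like Jenkins.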
Posted 2 weeks ago