
296 KNIME Jobs - Page 8

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

5.0 years · Salary: Confidential · Vellore, Tamil Nadu, India · Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools, Prometheus, ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Creativity in problem-solving and a continuous-learning mindset help them improve data engineering processes. Proficiency in SQL and database design principles, plus familiarity with Python programming, are the key qualifications for this role.

Responsibilities:
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, from small-scale databases to databases involving big data processing.
- Work on data security and compliance by implementing access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a sketch of such a workflow appears after this list).
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health and identify and resolve issues.
- Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Familiarize yourself with tools and technologies used in the team's workflow, such as KNIME for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, and assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Use online resources such as Stack Overflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.
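As an illustration of the import workflows this role develops, here is a minimal sketch using pandas and SQLAlchemy, two of the Python modules the posting names. The connection string, the listings.csv file, and the table name are hypothetical placeholders rather than details from the posting; a production workflow would add logging, retries, and proper credential management.

```python
# import_workflow.py - minimal CSV-to-PostgreSQL import sketch (hypothetical names).
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; real credentials belong in a secrets store.
ENGINE = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

def import_csv(path: str, table: str) -> int:
    """Load a CSV, apply basic accuracy checks, and append it to a table."""
    df = pd.read_csv(path)
    # Basic consistency checks before anything touches the database.
    df = df.drop_duplicates()
    if df.isna().all(axis=1).any():
        raise ValueError(f"{path}: contains completely empty rows")
    # Normalize column names so repeated imports stay consistent.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Append in batches; pandas issues the INSERT statements through SQLAlchemy.
    df.to_sql(table, ENGINE, if_exists="append", index=False, chunksize=1000)
    return len(df)

if __name__ == "__main__":
    print(f"imported {import_csv('listings.csv', 'research_listings')} rows")
```

The same engine also serves the optimization duties listed above, for example running EXPLAIN ANALYZE on slow queries and adding a supporting index once the execution plan shows a sequential scan.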
Skills and Experience:
- A Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Clear and effective communication skills.
- Comfort with autonomy and the ability to work independently.

Perks:
- A day off on the third Friday of every month (one long weekend each month)
- A monthly wellness reimbursement program to promote health and well-being
- A monthly office commutation reimbursement program
- Paid paternity and maternity leave

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 months ago


5.0 years · Salary: Confidential · Madurai, Tamil Nadu, India · Remote

Data Research - Database Engineer (Uplers' client: Forbes Advisor). The full description is identical to the Vellore listing above.

Posted 2 months ago


5.0 years · Salary: Confidential · Coimbatore, Tamil Nadu, India · Remote

Data Research - Database Engineer (Uplers' client: Forbes Advisor). The full description is identical to the Vellore listing above.

Posted 2 months ago


8.0 years · Salary: Confidential · Coimbatore, Tamil Nadu, India · Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

Must-have skills: SQL, Python, Tableau, OOP, KNIME, data integrity, QA experience, ETL tools, leadership

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as early as possible, developing the detective skills needed to find and mend issues, setting up new configurations and ensuring they are correct, testing new developments to guarantee that data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while looking for the next set of possible problems. They should understand how to automate systems and apply optimization techniques and best practices in debugging, testing, and issue hunting. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL, and data visualization practices.

Responsibilities:
- Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
- Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability and advocate for automated test coverage (see the sketch after this list).
- Process Improvement and Documentation: Establish standards for version control, documentation, and task tracking, and continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
- Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau, and KNIME, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
- High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
- Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
- Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
- Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.
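As a concrete illustration of the automated data-quality checks and test coverage described above, here is a minimal sketch in Python using pandas with pytest-style tests. The schema (product_id, price, rating), the business rules, and the function names are hypothetical placeholders, not details from the posting.

```python
# data_checks.py - minimal data-integrity check sketch (hypothetical schema).
import pandas as pd

REQUIRED_COLUMNS = ["product_id", "price", "rating"]  # assumed schema

def run_integrity_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues found in df."""
    issues = []
    # Structural check: all expected columns are present.
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        issues.append(f"missing columns: {missing}")
        return issues  # later checks assume the columns exist
    # Completeness: the primary key must never be null.
    if df["product_id"].isna().any():
        issues.append("null product_id values found")
    # Uniqueness: the primary key must not repeat.
    if df["product_id"].duplicated().any():
        issues.append("duplicate product_id values found")
    # Range checks: hypothetical business rules.
    if (df["price"] < 0).any():
        issues.append("negative prices found")
    if not df["rating"].between(0, 5).all():
        issues.append("ratings outside the 0-5 range found")
    return issues

# pytest-style tests, in the spirit of "advocate for automated test coverage".
def test_clean_frame_passes():
    clean = pd.DataFrame(
        {"product_id": [1, 2], "price": [9.99, 5.00], "rating": [4.5, 3.0]}
    )
    assert run_integrity_checks(clean) == []

def test_duplicate_ids_are_flagged():
    dirty = pd.DataFrame(
        {"product_id": [1, 1], "price": [9.99, 5.00], "rating": [4.5, 3.0]}
    )
    assert any("duplicate" in issue for issue in run_integrity_checks(dirty))
```

Running such checks on every new batch of researched data, and wiring the tests into CI, is one plausible way to catch errors "as early as possible" as the description asks.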
Skills and Experience:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
- Data Integrity and Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
- Analytical and Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
- Attention to Detail: Meticulous attention to data nuances and the ability to spot subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
- Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
- Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python and the ability to understand modular, reusable, and efficient code structures, with experience implementing OOP best practices to enhance code organization and maintainability.
- Data Management: Proficiency in MySQL and database design, with experience creating efficient data pipelines and workflows.
- Tools: Advanced knowledge of Tableau. Familiarity with KNIME or similar data processing tools is a plus.
- Testing and QA Expertise: Proven experience designing and implementing testing protocols, including unit, integration, and performance testing.
- Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
- Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
- Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting, with expertise in guiding others through debugging and technical problem-solving.
- Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Strong at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
- Adaptability and Continuous Learning: Stays updated on data engineering trends and fosters a culture of continuous learning and process evolution within the team.
- Data Pipelines: Hands-on experience building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.

Perks:
- A day off on the third Friday of every month (one long weekend each month)
- A monthly wellness reimbursement program to promote health and well-being
- A monthly office commutation reimbursement program
- Paid paternity and maternity leave

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 months ago


8.0 years · Salary: Confidential · Vellore, Tamil Nadu, India · Remote

Data Integrity Manager (Uplers' client: Forbes Advisor). The full description is identical to the Coimbatore listing above.

Posted 2 months ago


5.0 years · Salary: Confidential · Faridabad, Haryana, India · Remote

Data Research - Database Engineer (Uplers' client: Forbes Advisor). The full description is identical to the Vellore listing above.

Posted 2 months ago


8.0 years · Salary: Confidential · Faridabad, Haryana, India · Remote

Experience : 8.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership Forbes Advisor is Looking for: Job Description: Data Integrity Manager Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Integrity Team is a brand-new team with the purpose of ensuring all primary, publicly accessible data collected by our researchers is correct and accurate, allowing the insights produced from this data to be reliable. They collaborate with other teams while also operating independently. Their responsibilities include monitoring data researched to ensure that errors are identified and caught as soon as possible, creating detective skills for looking for issues and mending them, setting up new configurations and ensuring they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance, playing a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Data Integrity Manager will involve guiding team members through their tasks whilst looking for the next set of possible problems. They should understand about how to automate systems, optimization techniques, and best practices in debugging, testing and looking for issues. They work closely with other team members, offering technical mentorship, as well as advanced Python, SQL and data visualization practices. Responsibilities: Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards. Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability and advocate for automated test coverage. Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking to improve productivity and data quality. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality. 
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.

Skills and Experience:
Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau; familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stay updated on data engineering trends and foster a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
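A minimal, hypothetical sketch of the automated data-quality testing described above: the file "plans.csv" and the "provider" and "apr" columns are invented for illustration, not the team's actual schema.

```python
# Hypothetical data-quality tests; "plans.csv", "provider" and "apr" are
# illustrative names only.
import pandas as pd
import pytest


@pytest.fixture
def plans() -> pd.DataFrame:
    # In practice this would load a freshly researched data drop for review.
    return pd.read_csv("plans.csv")


def test_no_missing_providers(plans):
    assert plans["provider"].notna().all(), "every row needs a provider"


def test_apr_within_plausible_range(plans):
    # Catch keying errors early: APRs outside 0-100% are almost certainly typos.
    assert plans["apr"].between(0, 100).all()


def test_no_duplicate_rows(plans):
    assert not plans.duplicated().any()
```

Run with pytest on each new data drop; a failing test flags the batch for manual review before it reaches production.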

Posted 2 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is Looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale databases and databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms (see the sketch after this listing).
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.

Skills and Experience:
Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfortable with autonomy and able to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
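A hedged sketch of the query-plan analysis and indexing work described above, using psycopg2 against PostgreSQL; the DSN, the "products" table and the "category" column are placeholders, not the team's actual schema.

```python
# Sketch only: the DSN, "products" table and "category" column are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=research user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE shows whether PostgreSQL falls back to a sequential scan.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM products WHERE category = %s", ("loans",)
    )
    for (line,) in cur.fetchall():
        print(line)

    # If the plan shows a seq scan on a large table, a targeted index usually helps.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_products_category ON products (category)"
    )
conn.close()
```

Re-running the EXPLAIN afterwards is the usual way to confirm the planner now chooses an index scan.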

Posted 2 months ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Forbes Advisor is Looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as soon as possible, developing detection techniques for finding and fixing issues, setting up new configurations and verifying that they are correct, testing new developments to guarantee that data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while watching for the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL and data visualization practices.

Responsibilities:
Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage.
Process Improvement and Documentation: Establish standards for version control, documentation, and task tracking, and continuously refine them to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments (a minimal sketch follows this listing).

Skills and Experience:
Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau; familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stay updated on data engineering trends and foster a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
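A minimal sketch of the kind of extract-transform-load pipeline described above; the source file "rates.csv", the "id" key column, the "rates" target table and the connection URL are assumptions made for the example.

```python
# Minimal ETL sketch; "rates.csv", the "id" key column, the "rates" table and
# the connection URL are illustrative, not a real configuration.
import pandas as pd
from sqlalchemy import create_engine


def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["id"])                 # drop rows missing a key
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")  # audit column for lineage
    return df


def load(df: pd.DataFrame, table: str, url: str) -> None:
    engine = create_engine(url)
    df.to_sql(table, engine, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract("rates.csv")), "rates", "postgresql://localhost/research")
```

Splitting extract, transform, and load into separate functions keeps each stage independently testable, which matches the testing emphasis above.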

Posted 2 months ago

Apply

3.0 - 14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively and understand which business questions can be answered and how to unlock the answers.

Position responsibilities and expectations
Stakeholder-facing role across different functions of the organisation (e.g., Marketing, Finance, Logistics, Procurement, Customer, Supply Chain, HR) to develop analyses that lead to actionable insights that accelerate profitable growth
Working with big data, SAP, Oracle, Workday and several other large and complex databases
Designing and building analytical/ML algorithms using Python, R and other statistical tools (an illustrative sketch follows this listing)
Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel etc.
Ability to learn a new scripting language or analytics platform

Technical Skills Required (Must Have)
Exposure to Generative AI
Strong knowledge of statistical and data mining techniques like linear & logistic regression analysis, decision trees, bagging, boosting, time series and non-parametric analysis
Strong knowledge of DL & neural network architectures (CNN, RNN, LSTM, Transformers etc.)
Strong knowledge of SQL and R/Python and experience with distributed data/computing tools
Experience in advanced Text Analytics (NLP, NLU, NLG)
Strong hands-on experience in end-to-end statistical model development and implementation
Basic understanding of MLOps for scalable ML development
Expert-level proficiency in algorithm-building languages like SQL, R and Python and data visualization tools like Shiny, Qlik, Power BI etc.
Exposure to Cloud technologies (Azure, AWS or GCP)

Technical Skills Required (Any One or More)
Experience in video/image analytics
Experience in IoT/machine logs data analysis
Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME
Expertise in Cloud analytics platforms (Azure, AWS or Google)
Experience in Process Mining with expertise in Celonis or other tools
Proven capability in using Generative AI services like OpenAI, Google
Proven capability in building customized models from open-source distributions like Llama, Stable Diffusion

Consulting Skills Required
Excellent comprehension and written and oral communication skills
Positive, people-oriented, and energetic attitude
Analytical, creative, and innovative approach to solving problems
Strong written and verbal communication
Ability to work both independently and as part of diverse teams

Degrees/Certifications Earned
Ph.D / M.Sc. / M.Stat / MS (Statistics / Mathematics / Economics / Computer Science or ML/DL focus)
B.Tech / M.Tech (Computer Science, Mathematics & Scientific Computing etc.)

Others
3 to 14 years' experience in analytics modelling
Open to domestic and international travel

Mandatory Skills: Data Science
Preferred Skills: Data Science
Years of Experience: 10-15
Qualifications: B.Tech
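An illustrative scikit-learn sketch of two techniques the listing names, logistic regression and boosting, run on synthetic data; nothing in it is specific to the role or to PwC's tooling.

```python
# Toy comparison on synthetic data of logistic regression and a boosted
# ensemble, two techniques named in the listing above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, round(auc, 3))  # compare held-out AUC
```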

Posted 2 months ago

Apply

8.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Forbes Advisor is Looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as soon as possible, developing detection techniques for finding and fixing issues, setting up new configurations and verifying that they are correct, testing new developments to guarantee that data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while watching for the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL and data visualization practices.

Responsibilities:
Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage.
Process Improvement and Documentation: Establish standards for version control, documentation, and task tracking, and continuously refine them to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.

Skills and Experience:
Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues (see the sketch after this listing). Proficient in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau; familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stay updated on data engineering trends and foster a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
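One possible sketch of the anomaly-spotting described above, using the modified (median/MAD-based) z-score, which stays robust when a single outlier would inflate an ordinary standard deviation; the "fee" column and its values are made up.

```python
# Sketch: flag values whose modified z-score (median/MAD based) exceeds 3.5.
# The "fee" column and its values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "provider": list("ABCDEF"),
    "fee": [9.99, 10.49, 9.75, 10.25, 9.90, 210.0],
})

median = df["fee"].median()
mad = (df["fee"] - median).abs().median()        # median absolute deviation
robust_z = 0.6745 * (df["fee"] - median) / mad   # modified z-score
print(df[robust_z.abs() > 3.5])                  # only the 210.0 row is flagged
```

Rows flagged this way are candidates for manual re-checking before the data feeds any published insight.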

Posted 2 months ago

Apply

5.0 years

0 Lacs

Cuttack, Odisha, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is Looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale databases and databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a sketch follows this listing).
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.

Skills and Experience:
Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfortable with autonomy and able to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
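A hedged sketch of the spreadsheet-to-database migration described above, using gspread and SQLAlchemy, two of the modules the listing names; the credentials file, sheet name, target table and connection URL are placeholders.

```python
# Placeholder names throughout: the credentials file, sheet title, table and
# connection URL are illustrative, not an actual configuration.
import gspread
import pandas as pd
from sqlalchemy import create_engine

gc = gspread.service_account(filename="service-account.json")
rows = gc.open("Research Tracker").sheet1.get_all_records()

df = pd.DataFrame(rows)
# Normalize spreadsheet headers into SQL-friendly column names.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

engine = create_engine("postgresql://localhost/research")
df.to_sql("research_tracker", engine, if_exists="replace", index=False)
```

Wrapping this in a scheduled script is one common way to turn a one-off migration into a repeatable import workflow.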

Posted 2 months ago

Apply

8.0 years

0 Lacs

Cuttack, Odisha, India

Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Forbes Advisor is Looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as soon as possible, developing detection techniques for finding and fixing issues, setting up new configurations and verifying that they are correct, testing new developments to guarantee that data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while watching for the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL and data visualization practices.

Responsibilities:
Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage.
Process Improvement and Documentation: Establish standards for version control, documentation, and task tracking, and continuously refine them to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.

Skills and Experience:
Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures (a sketch follows this listing). Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau; familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stay updated on data engineering trends and foster a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, the ability to troubleshoot pipeline issues quickly, and the ability to automate repetitive data processes to improve efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
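A minimal sketch of the modular, reusable OOP structure for data-quality checks described above; the rule names and DataFrame columns are invented for the example.

```python
# Invented example: a small, reusable rule registry for row-level checks.
from dataclasses import dataclass, field
from typing import Callable, Dict

import pandas as pd

Rule = Callable[[pd.DataFrame], pd.Series]  # returns True where a row passes


@dataclass
class DataValidator:
    rules: Dict[str, Rule] = field(default_factory=dict)

    def add_rule(self, name: str, rule: Rule) -> None:
        self.rules[name] = rule

    def report(self, df: pd.DataFrame) -> Dict[str, int]:
        # Failing-row count per rule; all zeros means a clean batch.
        return {name: int((~rule(df)).sum()) for name, rule in self.rules.items()}


validator = DataValidator()
validator.add_rule("price_positive", lambda df: df["price"] > 0)
validator.add_rule("name_present", lambda df: df["name"].notna())

batch = pd.DataFrame({"name": ["A", None], "price": [10.0, -1.0]})
print(validator.report(batch))  # {'price_positive': 1, 'name_present': 1}
```

Keeping rules as named, pluggable callables makes each check individually testable and reusable across datasets, which is the point of the OOP emphasis above.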

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is Looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale databases and databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently. Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health well-being Monthly Office Commutation Reimbursement Program Paid paternity and maternity leaves How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
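As a rough illustration of the import workflows this listing describes, here is a minimal sketch using pandas and SQLAlchemy (both named in the skills list). The file name, table name, and connection string are hypothetical placeholders, not details from the posting.

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string and source file; adjust for a real environment.
ENGINE_URL = "postgresql+psycopg2://user:password@localhost:5432/research"
SOURCE_FILE = "monthly_rates.csv"

def import_spreadsheet(path: str, table: str, engine_url: str = ENGINE_URL) -> int:
    """Load a spreadsheet export, apply basic checks, and append it to a table."""
    df = pd.read_csv(path)
    df = df.dropna(how="all")  # drop fully empty rows
    # Normalize column names so repeated imports stay consistent.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    if df.empty:
        raise ValueError(f"No usable rows found in {path}")
    engine = create_engine(engine_url)
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(f"Imported {import_spreadsheet(SOURCE_FILE, 'monthly_rates')} rows")

Wrapping the import in one function keeps it easy to schedule (cron, Airflow, or similar) and to test against a throwaway database.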

Posted 2 months ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Forbes Advisor is looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as soon as possible, developing detection techniques to find and fix issues, setting up new configurations and ensuring they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while looking ahead to the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing, and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL, and data visualization practices.

Responsibilities:
Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage.
Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking to improve productivity and data quality. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau, and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.

Skills and Experience:
Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficiency in designing and implementing data validation frameworks (a minimal sketch follows this listing).
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify the root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficiency in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau. Familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stays updated on data engineering trends and fosters a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, with the ability to troubleshoot pipeline issues quickly and to automate repetitive data processes for efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
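As a rough illustration of the data validation frameworks mentioned above, here is a minimal sketch of rule-based checks over a pandas DataFrame. The column names and the 3-sigma threshold are hypothetical, not taken from the posting.

import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality findings."""
    findings = []
    # Rule 1: required columns must be present.
    for col in ("provider", "price", "collected_at"):
        if col not in df.columns:
            findings.append(f"missing column: {col}")
            return findings  # later rules assume these columns exist
    # Rule 2: no duplicate provider/date pairs.
    dupes = int(df.duplicated(subset=["provider", "collected_at"]).sum())
    if dupes:
        findings.append(f"{dupes} duplicate provider/date rows")
    # Rule 3: prices must be positive and within 3 standard deviations.
    if (df["price"] <= 0).any():
        findings.append("non-positive prices present")
    z = (df["price"] - df["price"].mean()) / df["price"].std()
    outliers = int((z.abs() > 3).sum())
    if outliers:
        findings.append(f"{outliers} price outliers beyond 3 sigma")
    return findings

Returning findings as plain strings, rather than raising on the first failure, lets a monitoring job report every issue in one pass.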

Posted 2 months ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, a grasp of database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, from small-scale databases to databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms (a minimal sketch follows this listing).
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
A bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skill in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
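As a rough illustration of the query-plan analysis and indexing work this listing describes, here is a minimal PostgreSQL sketch driven from Python with SQLAlchemy. The table name, column name, and connection string are hypothetical.

from sqlalchemy import create_engine, text

# Hypothetical connection string; adjust for a real environment.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

with engine.connect() as conn:
    # Inspect the planner's strategy for a frequent lookup.
    plan = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM rates WHERE provider = :p"),
        {"p": "acme"},
    )
    for row in plan:
        print(row[0])

    # If the plan shows a sequential scan on a large table, an index
    # on the filter column is the usual first fix.
    conn.execute(text("CREATE INDEX IF NOT EXISTS idx_rates_provider ON rates (provider)"))
    conn.commit()

Re-running the EXPLAIN after creating the index confirms whether the planner actually switched to an index scan, rather than assuming it did.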

Posted 2 months ago

Apply

8.0 years

0 Lacs

Guwahati, Assam, India

Remote

Experience: 8.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership

Forbes Advisor is looking for:

Job Description: Data Integrity Manager

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Integrity Team is a brand-new team whose purpose is to ensure that all primary, publicly accessible data collected by our researchers is correct and accurate, so that the insights produced from this data are reliable. The team collaborates with other teams while also operating independently. Its responsibilities include monitoring researched data so that errors are identified and caught as soon as possible, developing detection techniques to find and fix issues, setting up new configurations and ensuring they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Integrity Manager involves guiding team members through their tasks while looking ahead to the next set of possible problems. They should understand how to automate systems, apply optimization techniques, and follow best practices in debugging, testing, and issue detection. They work closely with other team members, offering technical mentorship as well as guidance on advanced Python, SQL, and data visualization practices.

Responsibilities:
Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards.
Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability, and advocate for automated test coverage.
Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking to improve productivity and data quality. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau, and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization, and advanced Python programming. Guide the team in building scalable and reliable solutions to continually track and monitor data quality.
Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments.

Skills and Experience:
Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets, with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficiency in designing and implementing data validation frameworks.
Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why": why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify the root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights.
Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficiency in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Tableau. Familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing (a minimal sketch follows this listing).
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Ability to clearly articulate technical challenges, propose effective solutions, and align cross-functional teams on project requirements, technical standards, and data workflows. Adept at conveying complex ideas to both technical and non-technical stakeholders, ensuring transparency and collaboration. Skilled in documenting data issues, methodologies, and technical workflows for knowledge sharing.
Adaptability and Continuous Learning: Stays updated on data engineering trends and fosters a culture of continuous learning and process evolution within the team.
Data Pipelines: Hands-on experience in building, maintaining, and optimizing ETL/ELT pipelines, including data transfer, transformation, and visualization, for real-world applications. Strong understanding of data workflows, with the ability to troubleshoot pipeline issues quickly and to automate repetitive data processes for efficiency and reliability.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
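As a rough illustration of the testing protocols this listing emphasizes, here is a minimal pytest sketch. The normalize_price transformation under test is a hypothetical example, not a function from the client's codebase.

import pytest

def normalize_price(raw: str) -> float:
    """Hypothetical transformation under test: '$1,234.50' -> 1234.50."""
    cleaned = raw.replace("$", "").replace(",", "").strip()
    return float(cleaned)

def test_normalize_price_strips_symbols():
    assert normalize_price("$1,234.50") == 1234.50

def test_normalize_price_rejects_garbage():
    # Unparseable input should fail loudly rather than silently produce a number.
    with pytest.raises(ValueError):
        normalize_price("n/a")

Running pytest in CI on every change is the automated coverage the role advocates: transformations like this are exactly where silent data corruption tends to start.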

Posted 2 months ago

Apply

5.0 years

0 Lacs

Guwahati, Assam, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, a grasp of database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, from small-scale databases to databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
A bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skill in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting (a minimal sketch follows this listing).
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
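As a rough illustration of the database monitoring mentioned in the skills list, here is a minimal sketch that exposes a health gauge to Prometheus using the prometheus_client library. The metric name, port, and connection string are hypothetical.

import time

from prometheus_client import Gauge, start_http_server
from sqlalchemy import create_engine, text

# Hypothetical metric and connection string; adjust for a real environment.
db_up = Gauge("research_db_up", "1 if the research database answers a ping, else 0")
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/research")

def probe() -> None:
    """Set the gauge based on whether a trivial query succeeds."""
    try:
        with engine.connect() as conn:
            conn.execute(text("SELECT 1"))
        db_up.set(1)
    except Exception:
        db_up.set(0)

if __name__ == "__main__":
    start_http_server(9200)  # Prometheus scrapes http://host:9200/metrics
    while True:
        probe()
        time.sleep(15)

A Grafana panel and an alert rule on this gauge would then cover the "identify and resolve issues" loop the listing describes.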

Posted 2 months ago

Apply

5.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote

Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, a grasp of database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, from small-scale databases to databases involving big data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
A bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skill in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration (a minimal sketch follows this listing).
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
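As a rough illustration of the serverless data processing mentioned above, here is a minimal sketch of an AWS Lambda handler that reacts to a CSV file landing in S3, using boto3. The bucket layout and the follow-on load step are hypothetical.

import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event: count and report the rows in the new file."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    # A real pipeline would validate these rows and load them into
    # RDS or BigQuery rather than just logging a count.
    print(f"{key}: {len(rows)} rows received")
    return {"rows": len(rows)}

The appeal of the serverless shape is that each file arrival runs in isolation, so one malformed upload cannot stall the rest of the ingestion queue.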

Posted 2 months ago

Apply

8.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote

Experience : 8.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: SQL, Python, Tableau, OOPs, KNIME, Data Integrity, QA Experience, ETL tools, Leadership Forbes Advisor is Looking for: Job Description: Data Integrity Manager Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. We bring rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Integrity Team is a brand-new team with the purpose of ensuring all primary, publicly accessible data collected by our researchers is correct and accurate, allowing the insights produced from this data to be reliable. They collaborate with other teams while also operating independently. Their responsibilities include monitoring data researched to ensure that errors are identified and caught as soon as possible, creating detective skills for looking for issues and mending them, setting up new configurations and ensuring they are correct, testing new developments to guarantee data quality is not compromised, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance, playing a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Data Integrity Manager will involve guiding team members through their tasks whilst looking for the next set of possible problems. They should understand about how to automate systems, optimization techniques, and best practices in debugging, testing and looking for issues. They work closely with other team members, offering technical mentorship, as well as advanced Python, SQL and data visualization practices. Responsibilities: Technical Mentorship and Code Quality: Mentor team members on coding standards, optimization, and debugging while conducting code and report reviews to enforce high code quality. Provide constructive feedback and enforce quality standards. Testing and Quality Assurance Leadership: Lead the development and implementation of rigorous testing protocols to ensure project reliability and advocate for automated test coverage. Process Improvement and Documentation: Establish and refine standards for version control, documentation, and task tracking to improve productivity and data quality. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality. 
Hands-On Technical Support: Provide expert troubleshooting support in Python, MySQL, GitKraken, Tableau and Knime, helping the team resolve complex technical issues. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills. High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including best practices, data visualization and advanced Python programming. Guide the team in building scalable and reliable solutions to continual track and monitor data quality. Cross-Functional Collaboration: Partner with data scientists, product managers, and data engineers to align data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions. Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards. Data Pipelines: Design, implement and maintain scalable data pipelines for efficient data transfer, transformation, and visualization in production environments. Skills and Experience: Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered. Data Integrity & Validation Experience: Strong ability to assess, validate, and ensure the integrity of large datasets with experience in identifying data inconsistencies, anomalies, and patterns that indicate data quality issues. Proficient in designing and implementing data validation frameworks. Analytical & Problem-Solving Mindset: Critical thinking with a habit of asking "why"—why anomalies exist, why trends deviate, and what underlying factors are at play. Strong diagnostic skills to identify root causes of data issues and propose actionable solutions. Ability to work with ambiguous data and derive meaningful insights. Attention to Detail: Meticulous attention to data nuances, capable of spotting subtle discrepancies. Strong focus on data accuracy, completeness, and consistency across systems. Technical Proficiency: Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing. Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to understand modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability. Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows. Tools: Advanced knowledge of Tableau. Familiarity with Knime or similar data processing tools is a plus. Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols. Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting. 
Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 months ago

Apply

8.0 years

0 Lacs

Ranchi, Jharkhand, India

Remote

Experience : 8.00+ years | Salary : Confidential (based on experience) | Shift : (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type : Remote | Placement Type : Full-time Permanent Position (a requirement for one of Uplers' clients - Forbes Advisor)
Job Description: Data Integrity Manager. The description, responsibilities, skills, perks, and application steps are identical to the Data Integrity Manager listing above.

Posted 2 months ago

Apply

8.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

Experience : 8.00+ years | Salary : Confidential (based on experience) | Shift : (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type : Remote | Placement Type : Full-time Permanent Position (a requirement for one of Uplers' clients - Forbes Advisor)
Job Description: Data Integrity Manager. The description, responsibilities, skills, perks, and application steps are identical to the Data Integrity Manager listing above.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Amritsar, Punjab, India

Remote

Experience : 5.00+ years
Salary : Confidential (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?
Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, PySpark, AWS Lambda

Forbes Advisor is looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview
At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes (a sketch of such a workflow follows), optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.
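As a rough illustration of the import workflows mentioned above, here is a minimal sketch using pandas and SQLAlchemy to load a spreadsheet export into PostgreSQL. The file path, table name, connection string, and normalization steps are placeholders and assumptions, not the team's actual process.

    import pandas as pd
    from sqlalchemy import create_engine

    def import_spreadsheet(path: str, table: str, conn_uri: str) -> int:
        df = pd.read_csv(path)
        # Light normalization before load: consistent column names, no duplicates.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        df = df.drop_duplicates()
        engine = create_engine(conn_uri)
        df.to_sql(table, engine, if_exists="append", index=False)
        return len(df)

    # Placeholder values, for illustration only:
    # rows = import_spreadsheet("research_data.csv", "research_data",
    #                           "postgresql+psycopg2://user:pass@host/db")

The same pattern can target Google BigQuery by swapping the engine for a BigQuery client (e.g., the pandas-gbq library).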
Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale databases and databases involving big-data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms (see the sketch after this list).
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, and assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as Stack Overflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.
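The query-optimization responsibility above usually starts with reading execution plans. Here is a minimal sketch assuming PostgreSQL and SQLAlchemy; the table, column, and query are hypothetical, and plan output varies by database:

    from sqlalchemy import create_engine, text

    def explain(engine, sql: str) -> None:
        # Print the execution plan so expensive sequential scans are visible.
        # Sketch only: never interpolate untrusted input into SQL.
        with engine.connect() as conn:
            for row in conn.execute(text("EXPLAIN ANALYZE " + sql)):
                print(row[0])

    # engine = create_engine("postgresql+psycopg2://user:pass@host/db")
    # explain(engine, "SELECT * FROM rates WHERE provider_id = 42")
    # If the plan shows 'Seq Scan on rates' for a selective filter,
    # an index is usually the fix:
    # with engine.begin() as conn:
    #     conn.execute(text(
    #         "CREATE INDEX IF NOT EXISTS idx_rates_provider ON rates (provider_id)"))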
Skills and Experience:
Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting (see the sketch after this list).
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfortable with autonomy and the ability to work independently.
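For the monitoring-tools requirement, a common pattern is a small exporter that publishes database health metrics for Prometheus to scrape and Grafana to chart. A minimal sketch using the prometheus_client library; the metric names, port, and checks are hypothetical:

    import time
    from prometheus_client import Gauge, start_http_server

    # Hypothetical health metrics for the research database.
    db_up = Gauge("research_db_up", "1 if the database answers a ping query")
    failed_rows = Gauge("data_validation_failed_rows",
                        "Rows failing validation checks in the latest run")

    def collect() -> None:
        # Placeholder values; a real collector would run SELECT 1
        # and the validation queries against the live database.
        db_up.set(1)
        failed_rows.set(0)

    if __name__ == "__main__":
        start_http_server(9100)  # exposes /metrics for Prometheus to scrape
        while True:
            collect()
            time.sleep(30)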
Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 months ago

Apply

8.0 years

0 Lacs

Amritsar, Punjab, India

Remote

Experience : 8.00+ years | Salary : Confidential (based on experience) | Shift : (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type : Remote | Placement Type : Full-time Permanent Position (a requirement for one of Uplers' clients - Forbes Advisor)
Job Description: Data Integrity Manager. The description, responsibilities, skills, perks, and application steps are identical to the Data Integrity Manager listing above.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

Experience : 5.00+ years | Salary : Confidential (based on experience) | Shift : (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type : Remote | Placement Type : Full-time Permanent Position (a requirement for one of Uplers' clients - Forbes Advisor)
Job Description: Data Research - Database Engineer. The description, responsibilities, skills, perks, and application steps are identical to the Data Research - Database Engineer listing above.

Posted 2 months ago

Apply