
2009 Redshift Jobs - Page 48

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

Remote


Experience: 7.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

Must-have skills: dbt, GCP, affiliate marketing data, SQL, ETL, team handling, data warehousing, Snowflake, BigQuery, Redshift

Job Description: Data Warehouse Lead

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions.

Forbes Marketplace is seeking a highly skilled and experienced data warehouse engineer to lead the design, development, implementation, and maintenance of our enterprise data warehouse. We are seeking someone with a strong background in data warehousing principles, ETL/ELT processes, and database technologies. You will be responsible for guiding a team of data warehouse developers and ensuring the data warehouse is robust, scalable, performant, and meets the evolving analytical needs of the organization.

Responsibilities:
- Lead the design, development, and maintenance of data models optimized for reporting and analysis.
- Ensure data quality, integrity, and consistency throughout the data warehousing process to enable reliable and timely ingestion of data from source systems.
- Troubleshoot and resolve issues related to data pipelines and data integrity.
- Work closely with business analysts and other stakeholders to understand their data needs and provide solutions.
- Communicate technical concepts to non-technical audiences effectively.
- Ensure the data warehouse is scalable to accommodate growing data volumes and user demands.
- Ensure adherence to data governance and privacy policies and procedures.
- Implement and monitor data quality metrics and processes.
- Lead and mentor a team of data warehouse developers, providing technical guidance and support.
- Stay up to date with the latest trends and technologies in data warehousing and business intelligence.
- Foster a collaborative and high-performing team environment.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 7-8 years of progressive experience in data warehousing, with at least 3 years in a lead or senior role.
- Deep understanding of data warehousing concepts, principles, and methodologies.
- Strong proficiency in SQL and experience with various database platforms (e.g., BigQuery, Redshift, Snowflake).
- Good understanding of affiliate marketing data (GA4 and paid marketing channels such as Google Ads and Facebook Ads; the more the better).
- Hands-on experience with dbt and other ETL/ELT tools and technologies.
- Experience with data modeling techniques such as dimensional modeling, star schema, and snowflake schema (a star-schema sketch follows this listing).
- Experience with cloud-based data warehousing solutions (e.g., AWS, Azure, GCP); GCP is highly preferred.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication, presentation, and interpersonal skills.
- Ability to thrive in a fast-paced and dynamic environment.
- Familiarity with business intelligence and reporting tools (e.g., Tableau, Power BI, Looker).
- Experience with data governance and data quality frameworks is a plus.

Perks:
- Day off on the 3rd Friday of every month (one long weekend each month).
- Monthly wellness reimbursement program to promote health and well-being.
- Monthly office commutation reimbursement program.
- Paid paternity and maternity leave.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for an interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
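As context for the dimensional-modeling requirement above, here is a minimal, hedged sketch of a star-schema query run against BigQuery from Python. Every dataset, table, and column name (`fact_conversions`, `dim_channel`, `revenue_usd`, and so on) is hypothetical, not Forbes Advisor's actual schema, and the snippet assumes the `google-cloud-bigquery` client library with application-default credentials configured.

```python
# Minimal star-schema query from Python against BigQuery (illustrative only).
# Assumes: pip install google-cloud-bigquery, plus GCP application-default
# credentials (gcloud auth application-default login). All dataset, table,
# and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

# Classic star-schema access pattern: join the fact table to a dimension
# and aggregate an additive measure by a descriptive attribute.
SQL = """
SELECT
  d.channel_name,
  SUM(f.revenue_usd) AS total_revenue
FROM `my_project.marketing.fact_conversions` AS f
JOIN `my_project.marketing.dim_channel` AS d
  ON f.channel_id = d.channel_id
WHERE f.event_date >= '2024-01-01'
GROUP BY d.channel_name
ORDER BY total_revenue DESC
"""

for row in client.query(SQL).result():
    print(row.channel_name, row.total_revenue)
```

The design choice the query illustrates: dimension tables carry descriptive attributes (channel names), fact tables carry measures keyed by dimension IDs, so reporting queries reduce to a join plus an aggregation.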

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description

- Developing ETL pipelines: Designing, developing, and maintaining scalable and adaptable data pipelines using Python or PySpark to facilitate the smooth migration of data from diverse data sources. Host these ETL pipelines on AWS EC2, AWS Glue, or AWS EMR, and store the data in cloud database services such as Google BigQuery, AWS S3, Redshift, RDS, or Delta Lake. This includes managing significant data migrations and ensuring seamless transitions between systems (see the PySpark sketch after this list).
- Implementing a data quality check framework: Establishing and executing data quality checks and validation pipelines using tools such as Python, PySpark, Athena or BigQuery, S3, and Delta Lake to uphold the integrity and accuracy of our datasets.
- Creating mechanisms for generating ETL migration status reports: Devising a framework to generate concise summary reports detailing data migration progress, and promptly alerting stakeholders to any failures within ETL pipelines so that resulting data discrepancies are resolved swiftly. Implement this using standard SMTP, Python, AWS SNS, AWS SES, AWS S3, Delta Lake, and similar services.
- Data transformations and processing: Implementing various data encryption and decryption techniques using Python and PySpark libraries, in addition to generating insightful reports and analyses from processed data to aid informed business decision-making.
- Development of APIs: Building APIs using frameworks such as Flask or Django, incorporating diverse authentication and authorization techniques to safeguard the exchange of data. Host these APIs on an EC2 server using services like Gearman, or write the API logic in Lambda functions and expose them through the cloud provider's API Gateway service.
- Code versioning and deployment: Leveraging GitHub extensively for robust code versioning, deployment of the latest code iterations, seamless transitions between code versions, and merging branches to streamline development and release processes.
- Automation: Designing and implementing code automation solutions to streamline and automate manual tasks effectively.
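To make the pipeline and data-quality bullets above concrete, here is a minimal, hedged PySpark sketch: read a source CSV, apply a transformation, run a simple row-count quality gate, and write Parquet to S3, with an SNS alert on failure. The bucket paths, column names, and topic ARN are hypothetical, and the S3 writes assume Hadoop's S3A connector and AWS credentials are configured (typically preconfigured on Glue or EMR).

```python
# Hypothetical PySpark ETL with a basic data-quality gate (sketch only).
# Assumes: pyspark and boto3 installed, AWS credentials configured.
import boto3
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read a source extract (path is hypothetical).
orders = spark.read.option("header", True).csv("s3a://example-landing/orders/")

# Transform: type the amount column and keep completed orders only.
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
)

# Data-quality gate: refuse to publish an empty or null-riddled batch.
total = cleaned.count()
null_amounts = cleaned.filter(F.col("amount").isNull()).count()
if total == 0 or null_amounts > 0:
    # Alert stakeholders via SNS, per the status-report bullet (ARN is hypothetical).
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:etl-alerts",
        Message=f"orders-etl failed DQ: rows={total}, null_amounts={null_amounts}",
    )
    raise RuntimeError("Data-quality check failed; batch not published")

# Load: write Parquet to the curated zone (bucket is hypothetical).
cleaned.write.mode("overwrite").parquet("s3a://example-curated/orders/")
```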
- Knowledge of Python libraries/frameworks like Pandas, NumPy, or Spark for data processing and analysis.
- Familiarity with big data processing frameworks like Apache Hadoop and Apache Spark for handling large-scale datasets and performing distributed computing.
- Knowledge of cloud-based services like AWS S3, AWS Glue, AWS EMR, AWS Lambda, Athena, Azure Data Lake, and Google BigQuery.
- Familiarity with version control systems like Git for managing codebase changes, collaborating with team members, and maintaining code quality.
- Experience with web scraping libraries and frameworks like BeautifulSoup, Scrapy, Puppeteer, or Selenium is highly beneficial.
- Knowledge of regular expressions is useful for pattern matching and extracting specific data formats from text.
- An understanding of HTTP is essential: how web servers respond to requests, how to send requests, handle responses, and manage sessions and cookies.
- Familiarity with XPath expressions or CSS selectors is important for targeting specific elements within an HTML structure (a short scraping sketch follows this listing).

Required Experience
The ideal candidate will have 1-3 years of relevant experience in data engineering roles, with a demonstrated history of successfully developing and maintaining ETL pipelines, handling big data migrations, and ensuring data quality and validation. Must have excellent knowledge of, and programming capability in, Python and PySpark on any of the major cloud platforms (AWS, Azure, or Google Cloud).

Role
Industry type: Engineering
Functional area: Data Engineering, Software Development, Automation
Employment type: Full time, permanent
Role category: System Design/Implementation
Education: A minimum educational requirement is graduation.

Here at Havas, across the group we pride ourselves on being committed to offering equal opportunities to all potential employees, and we have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability, and other factors that have no bearing on an individual's ability to perform their job.
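As a companion to the web-scraping and HTTP bullets above, here is a minimal, hedged sketch using requests and BeautifulSoup. The URL and CSS selectors are hypothetical placeholders; a real target page needs its own selectors, and its robots.txt and terms of use should be respected.

```python
# Hypothetical scraping sketch: fetch a page, parse it, select elements.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/jobs"  # placeholder target

# A Session carries cookies across requests, per the HTTP bullet above.
with requests.Session() as session:
    resp = session.get(URL, timeout=10)
    resp.raise_for_status()  # fail loudly on 4xx/5xx responses

soup = BeautifulSoup(resp.text, "html.parser")

# CSS selectors (hypothetical): each job card holds a title and a location.
for card in soup.select("div.job-card"):
    title = card.select_one("h2.title")
    location = card.select_one("span.location")
    if title and location:
        print(title.get_text(strip=True), "|", location.get_text(strip=True))
```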

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools, Prometheus, ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview
At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, from small-scale databases to those involving big data processing.
- Work on data security and compliance by implementing access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a minimal import-workflow sketch follows this listing).
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health and identify and resolve issues.
- Collaborate with the team's full-stack web developer to support the implementation of efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
- Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Leverage online resources such as Stack Overflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
- Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good and effective communication skills.
- Comfortable with autonomy and able to work independently.

Perks:
- Day off on the 3rd Friday of every month (one long weekend each month).
- Monthly wellness reimbursement program to promote health and well-being.
- Monthly office commutation reimbursement program.
- Paid paternity and maternity leave.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for an interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
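To ground the import-workflow and indexing bullets above, here is a minimal, hedged sketch of a spreadsheet-to-PostgreSQL import using pandas and SQLAlchemy. The connection string, CSV path, table, and column names are hypothetical placeholders; the sketch assumes pandas, SQLAlchemy, and psycopg2 are installed.

```python
# Hypothetical spreadsheet-to-PostgreSQL import workflow (sketch only).
# Assumes: pip install pandas sqlalchemy psycopg2-binary
import pandas as pd
from sqlalchemy import create_engine, text

# Placeholder DSN; a real deployment would read credentials from the environment.
ENGINE = create_engine("postgresql+psycopg2://user:pass@localhost:5432/research")

# Extract: load the spreadsheet export (path is hypothetical).
df = pd.read_csv("exports/products.csv")

# Validate: basic accuracy and consistency checks before loading.
df = df.drop_duplicates(subset=["sku"])
if df["price"].isna().any():
    raise ValueError("price column contains nulls; aborting import")

# Load: replace the staging table and index it, all in one transaction.
with ENGINE.begin() as conn:
    df.to_sql("products_staging", conn, if_exists="replace", index=False)
    # Index the lookup column, per the indexing-strategy bullet.
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS ix_products_staging_sku "
        "ON products_staging (sku)"
    ))
```

Running the load inside `ENGINE.begin()` means a failed validation or index build rolls the whole import back, which is one simple way to keep the staging table consistent.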

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

Remote


This listing's description is identical, word for word, to the Data Warehouse Lead posting (7.00+ years, remote, Uplers client: Forbes Advisor) at the top of this page; see that entry for the full job description, qualifications, perks, and application steps.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


This listing's description is identical, word for word, to the Data Research - Database Engineer posting (5.00+ years, remote, Uplers client: Forbes Advisor) earlier on this page; see that entry for the full job description, skills, perks, and application steps.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Agra, Uttar Pradesh, India

Remote


This listing's description is identical, word for word, to the Data Research - Database Engineer posting (5.00+ years, remote, Uplers client: Forbes Advisor) earlier on this page; see that entry for the full job description, skills, perks, and application steps.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


This listing's description is identical, word for word, to the Data Research - Database Engineer posting (5.00+ years, remote, Uplers client: Forbes Advisor) earlier on this page; see that entry for the full job description, skills, perks, and application steps.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

Remote


This listing's description is identical, word for word, to the Data Warehouse Lead posting (7.00+ years, remote, Uplers client: Forbes Advisor) at the top of this page; see that entry for the full job description, qualifications, perks, and application steps.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Agra, Uttar Pradesh, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.
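The Data Warehouse Lead role described above asks for strong SQL across platforms such as BigQuery, Redshift and Snowflake, plus dimensional-modeling experience, with GCP preferred. Purely as a hedged, minimal sketch (not part of the posting), the example below queries a hypothetical star schema on BigQuery with the google-cloud-bigquery client; the dataset and table names are assumptions.

```python
# Minimal sketch: rolling up a hypothetical star schema on BigQuery.
# The dataset/table names (analytics.fact_orders, analytics.dim_channel)
# are illustrative assumptions, not taken from the job posting.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
    SELECT d.channel_name,
           DATE_TRUNC(f.order_date, MONTH) AS month,
           SUM(f.revenue) AS revenue
    FROM `analytics.fact_orders` AS f
    JOIN `analytics.dim_channel` AS d
      ON f.channel_key = d.channel_key
    GROUP BY d.channel_name, month
    ORDER BY month, revenue DESC
"""

for row in client.query(sql).result():  # .result() blocks until the job finishes
    print(row.channel_name, row.month, row.revenue)
```

A star schema keeps additive measures in the fact table and descriptive attributes in the dimensions, which is why this kind of join-and-aggregate query stays short and predictable as data volumes grow.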

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

Remote


Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools (Prometheus), ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace brings decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales, and applies that industry knowledge to its global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team that manages data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day for a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to manage company data efficiently. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Creativity in problem-solving and a continuous-learning mindset help improve the team's data engineering processes. Proficiency in SQL and database design principles, along with familiarity with Python, are the key qualifications for this role.

Responsibilities:
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, from small-scale databases to those involving big data processing.
- Implement data security and compliance through access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a minimal sketch of such an import follows this listing).
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health; identify and resolve issues.
- Collaborate with the team's full-stack web developer to support efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying current with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Become familiar with the tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, and take accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Leverage online resources such as Stack Overflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
- Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts that automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail; creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities; ability to work collaboratively in a team; good and effective communication skills; comfortable with autonomy and able to work independently.

Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
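The listing above leans heavily on automated import workflows built with Pandas and SQLAlchemy (both named in its skills list). The sketch below is a minimal, hedged illustration under assumed names: products.csv, the connection string, the product_id column, and the staging_products table are all hypothetical, not taken from the posting.

```python
# Minimal sketch of an automated spreadsheet-to-PostgreSQL import.
# File name, connection string, column and table names are assumptions.
import pandas as pd
from sqlalchemy import create_engine

ENGINE = create_engine("postgresql+psycopg2://user:password@localhost/research")

def import_csv(path: str, table: str) -> int:
    """Load a CSV into PostgreSQL after light validation; return the row count."""
    df = pd.read_csv(path)
    df = df.dropna(subset=["product_id"])        # basic data-quality rule (assumed key column)
    df["imported_at"] = pd.Timestamp.now(tz="UTC")  # audit column for traceability
    df.to_sql(table, ENGINE, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(import_csv("products.csv", "staging_products"), "rows imported")
```

Appending an imported_at audit column is one simple way to support the listing's data-accuracy and consistency goals; a production workflow would also add schema validation and idempotent upserts rather than a plain append.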

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


P1-C3-STS

Seeking a developer with solid experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources. The role requires someone who can:
- Optimize data models for performance and efficiency.
- Write SQL queries to support data analysis and reporting.
- Design, implement, and maintain the data architecture for all AWS data services.
- Work with stakeholders to identify business needs and requirements for data-related projects.
- Design and implement ETL processes to load data into the data warehouse (a hedged Athena sketch follows this listing).
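As a rough illustration of the Athena part of this stack (not the client's actual code), the sketch below starts a query with boto3, polls until it reaches a terminal state, and fetches the first page of results. The analytics_db database, web_events table, and S3 output bucket are assumptions made for the example.

```python
# Minimal sketch: run an Athena query via boto3 and fetch results.
# Database/table names and the S3 output location are illustrative assumptions.
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str) -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics_db"},                 # assumed database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # assumed bucket
    )["QueryExecutionId"]

    while True:  # poll until Athena reports a terminal state
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    # Returns only the first page; real code would paginate with NextToken.
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

rows = run_query("SELECT event_date, COUNT(*) AS events FROM web_events GROUP BY event_date")
```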

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

Remote


Data Research - Database Engineer (Forbes Advisor, via Uplers). This listing repeats the Database Engineer job description verbatim; see the full posting earlier on this page.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Surat, Gujarat, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Data Research - Database Engineer (Forbes Advisor, via Uplers). This listing repeats the Database Engineer job description verbatim; see the full posting earlier on this page.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote


Data Research - Database Engineer (Forbes Advisor, via Uplers). This listing repeats the Database Engineer job description verbatim; see the full posting earlier on this page.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote


Data Warehouse Lead (Forbes Advisor, via Uplers). This listing repeats the Data Warehouse Lead job description verbatim; see the full posting at the top of this page.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Lucknow Area

Remote


Experience: 5.00+ years | Salary: Confidential (based on experience) | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-time permanent position (Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools (Prometheus), ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team charged with managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day for a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to manage company data efficiently. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes (a minimal sketch of such a workflow follows this listing), optimize database performance, ensure data integrity, and implement data security measures. Creativity in problem-solving and a continuous-learning mindset help improve the team's data engineering processes. Proficiency in SQL and database design principles, along with familiarity with Python programming, are the key qualifications for this role.

Responsibilities:
  • Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
  • Work with databases of varying scales, from small-scale databases to databases involving big data processing.
  • Work on data security and compliance by implementing access controls, encryption, and compliance standards.
  • Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
  • Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
  • Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
  • Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
  • Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
  • Monitor database health and identify and resolve issues.
  • Collaborate with the team's full-stack web developer to support efficient data access and retrieval mechanisms.
  • Implement data security measures to protect sensitive information and comply with relevant regulations.
  • Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
  • Embrace a learning mindset, staying current with emerging database technologies, tools, and best practices, and explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
  • Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
  • Use Python for tasks such as data manipulation, automation, and scripting.
  • Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, and assume accountability for achieving development milestones.
  • Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
  • Collaborate with and assist fellow members of the Data Research Engineering Team as required.
  • Perform tasks with precision and build reliable systems.
  • Leverage online resources such as Stack Overflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
  • Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
  • Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
  • Skill in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
  • Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
  • Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
  • Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
  • Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
  • Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
  • Knowledge of SQL and understanding of database design principles, normalization, and indexing.
  • Knowledge of data migration and ETL (Extract, Transform, Load) processes, including integrating data from various sources.
  • Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
  • Eagerness to develop import workflows and scripts to automate data import processes.
  • Knowledge of data security best practices, including access controls, encryption, and compliance standards.
  • Strong problem-solving and analytical skills with attention to detail; creative and critical thinking.
  • Strong willingness to learn and expand knowledge in data engineering.
  • Familiarity with Agile development methodologies is a plus.
  • Experience with version control systems, such as Git, for collaborative development.
  • Ability to thrive in a fast-paced environment with rapidly changing priorities, to work collaboratively in a team, and to work independently with autonomy.
  • Good and effective communication skills.

Perks:
  • Day off on the 3rd Friday of every month (one long weekend each month)
  • Monthly wellness reimbursement program to promote health and well-being
  • Monthly office commutation reimbursement program
  • Paid paternity and maternity leave

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
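To make the "import workflows and scripts" responsibility above concrete, here is a minimal illustrative sketch in Python using pandas and SQLAlchemy, two of the modules the listing names. It is a sketch under assumptions rather than team code: the connection URL, the products.csv file, the products table, and the product_id column are all hypothetical placeholders.

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection URL; real credentials would come from config/secrets.
ENGINE_URL = "postgresql+psycopg2://user:password@localhost:5432/research"

def import_csv(path: str, table: str) -> int:
    """Load a CSV export into PostgreSQL with basic data-quality gates."""
    df = pd.read_csv(path)

    # Enforce simple consistency rules before loading: drop exact
    # duplicates and rows missing the (hypothetical) primary identifier.
    df = df.drop_duplicates()
    df = df.dropna(subset=["product_id"])

    engine = create_engine(ENGINE_URL)
    # if_exists="append" preserves existing rows; method="multi"
    # batches rows into multi-row INSERT statements.
    df.to_sql(table, engine, if_exists="append", index=False, method="multi")
    return len(df)

if __name__ == "__main__":
    print(f"Imported {import_csv('products.csv', 'products')} rows")

The same pattern extends to cloud targets: with an appropriate SQLAlchemy dialect the to_sql call can point at a managed database, though for warehouse-scale loads a bulk-load path (for example BigQuery load jobs) is generally preferred over row inserts.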

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Greater Lucknow Area

Remote


Data Warehouse Lead, Forbes Advisor (via Uplers). Description identical to the Data Warehouse Lead listing above.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Nashik, Maharashtra, India

Remote


Data Research - Database Engineer, Forbes Advisor (via Uplers). Description identical to the Database Engineer listing above.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Thane, Maharashtra, India

Remote


Data Warehouse Lead, Forbes Advisor (via Uplers). Description identical to the Data Warehouse Lead listing above.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Thane, Maharashtra, India

Remote


Data Research - Database Engineer, Forbes Advisor (via Uplers). Description identical to the Database Engineer listing above.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Nashik, Maharashtra, India

Remote


Data Warehouse Lead, Forbes Advisor (via Uplers). Description identical to the Data Warehouse Lead listing above.

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across a wide range of industries in the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary for Redshift professionals in India varies with experience and location. Entry-level positions typically pay in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may progress through roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift itself, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift (see the sketch after this list). (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift, and what are their use cases? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
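Questions like the SORTKEY/DISTKEY, COPY, and vacuuming ones above reward a concrete answer, so a short worked sketch follows. It is illustrative only, written in Python with psycopg2 (Redshift speaks a PostgreSQL-compatible protocol); the cluster endpoint, credentials, S3 path, IAM role ARN, and sales table are all hypothetical placeholders.

import psycopg2

# Table design: DISTKEY co-locates rows that join or aggregate on
# store_id on the same slice; SORTKEY lets time-range predicates
# skip whole blocks during scans.
DDL = """
CREATE TABLE IF NOT EXISTS sales (
    sale_id  BIGINT,
    store_id INTEGER,
    sold_at  TIMESTAMP,
    amount   DECIMAL(10, 2)
)
DISTKEY (store_id)
SORTKEY (sold_at);
"""

# COPY is the recommended bulk-load path: it reads from S3 in parallel
# across slices rather than issuing row-by-row INSERTs.
COPY_CMD = """
COPY sales
FROM 's3://example-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS CSV
IGNOREHEADER 1;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="..."
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(COPY_CMD)
    cur.execute("VACUUM sales;")   # reclaim space and restore sort order
    cur.execute("ANALYZE sales;")  # refresh planner statistics
conn.close()

The Spectrum questions follow the same thread: Redshift Spectrum queries data left in S3 through external tables, so cold data never has to be COPY-loaded into cluster storage at all.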

Conclusion

As demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies