5.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 5.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor)

What do you need for this opportunity?

Must-have skills: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools, Prometheus, ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Forbes Advisor is looking for:

Job Description: Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease.

Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Responsibilities:

Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale databases and databases involving big-data processing.
Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a minimal example sketch follows this job description).
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and identify and resolve issues.
Collaborate with the full-stack web developer on the team to support the implementation of efficient data access and retrieval mechanisms.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
Use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
Collaborate with and assist fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems.
Leverage online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.

Skills And Experience

Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:

Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leave

How to apply for this opportunity?

Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
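As an illustration of the import workflows described in the responsibilities above, here is a minimal, hypothetical sketch of a spreadsheet-to-PostgreSQL load using Pandas and SQLAlchemy, two of the modules the posting names. The connection string, CSV path, table name, and key column are placeholders, not details from the posting.

```python
# Minimal sketch of an automated spreadsheet-to-PostgreSQL import workflow.
# All connection details, file paths, table and column names are illustrative.
import pandas as pd
from sqlalchemy import create_engine, text

DB_URL = "postgresql+psycopg2://user:password@localhost:5432/research"  # placeholder
CSV_PATH = "products.csv"  # placeholder export from a spreadsheet


def load_csv_to_postgres(csv_path: str, table: str, engine) -> int:
    df = pd.read_csv(csv_path)

    # Basic data-quality checks before loading: drop exact duplicates and
    # reject rows missing the assumed key column.
    df = df.drop_duplicates()
    df = df.dropna(subset=["product_id"])  # assumed key column

    # Write to a staging table, then rename inside one transaction so a
    # failed import never leaves the target table half-populated.
    with engine.begin() as conn:
        df.to_sql(f"{table}_staging", conn, if_exists="replace", index=False)
        conn.execute(text(f"DROP TABLE IF EXISTS {table}_old"))
        conn.execute(text(f"ALTER TABLE IF EXISTS {table} RENAME TO {table}_old"))
        conn.execute(text(f"ALTER TABLE {table}_staging RENAME TO {table}"))
    return len(df)


if __name__ == "__main__":
    engine = create_engine(DB_URL)
    rows = load_csv_to_postgres(CSV_PATH, "products", engine)
    print(f"Imported {rows} rows")
```

Staging-then-rename is one common way to keep the target table consistent if a load fails; in practice, the indexing and query-plan work the posting mentions (e.g., EXPLAIN ANALYZE in PostgreSQL) would follow a load like this.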
Posted 2 weeks ago
5.0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Posted 2 weeks ago
5.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Greetings from TCS! Saturday Walk-in Drive, 7th June '25.

Job Title: ServiceNow Developer
Venue/Location: Chennai / Bangalore
Experience required: 4 to 9 years
Keywords: WAS, IIS, DataPower, BPM

Must-Have

Candidates should have knowledge of the activities below.

WAS
Install and build WebSphere 8.5 and WAS 9 application servers on the Linux platform.
Troubleshoot and maintain applications on Apache web servers and middleware application servers.
Install and configure local and remote plugins; integrate web infrastructure components.
Set up vertical, horizontal and hybrid clustered environments for scalability, high availability and failover; migrate applications and process servers.
Work with other infrastructure teams to fulfill server prerequisites such as server and storage requirements.
Create deployment scripts and work with application stakeholders to deploy applications, configure web services, and provide connectivity, messaging and service-oriented integration to power SOA.
Create scripts to automate tasks such as JVM stop and start scripts (a minimal example follows this posting), and work with the server tech team to configure run-level scripts.
Analyze log files, thread dumps, and application-specific configurations to troubleshoot applications and execute performance tuning.
Meet with developers and other stakeholders to understand requirements for new environment builds or for migrating applications to a new WAS version.
Install and renew SSL certificates on Apache HTTP servers and WAS servers.
Enable stats during performance testing and share the performance analysis with the QA team and other stakeholders.
Apply WAS fixpacks to remediate security vulnerabilities identified by the vendor and security teams.

IIS
Manage URLs and apps hosted on IIS web servers.
Manage SSL certificate renewal on IIS servers.
Coordinate with the network team for network load balancer configuration and troubleshooting.
Remediate security vulnerabilities reported by the security team.
IIS application deployment.
IIS service monitoring.
IIS application pool restart.
IIS .NET upgrades.
IIS cache clearing.
IIS new application configuration.

BPM
Addition and removal of applications.
Configuration of new BPM Workflow applications.
Snapshot deployment of the BPM Workflow applications.
Coordination with other teams for OS and database patching.
Work with product vendors such as IBM on BPM-related issues.
SSL management.
Version upgrades of the BPM Workflow products.

Good-to-Have
1. Excellent written, verbal and presentation skills.
2. Flexibility to work in shifts.
3. Well organized, with the ability to multi-task and work with minimal supervision, and quick learning skills.
4. Strong interpersonal and negotiation skills.

Applications valid till 7th June '25.
Mandatory documents: updated CV, Aadhaar or PAN card copy, passport-size photo.
Note: Freshers and ex-TCSers should not apply.

Thanks & Regards,
Supriya Kashid
TCS HR Recruitment Team (TAG), Pune
Mail to: supriya.kashid@tcs.com
Website: http://www.tcs.com
Tata Consultancy Services
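As a concrete illustration of the JVM stop/start scripting mentioned in the WAS activities above, here is a minimal sketch in Jython, the Python-syntax language accepted by WebSphere's wsadmin tool. The server, node, and script names are placeholders, and the exact invocation can differ by environment; treat this as an assumed example rather than a prescribed procedure.

```python
# Minimal wsadmin (Jython) sketch for routine WebSphere tasks: restarting a
# JVM and listing installed applications. Typically run as:
#   wsadmin.sh -lang jython -f restart_server.py
# Server and node names below are placeholders.

serverName = 'server1'   # placeholder JVM name
nodeName = 'appNode01'   # placeholder node name

# List applications currently installed in the cell.
print AdminApp.list()

# Stop and then start the application server (a simple restart).
print 'Stopping %s on %s ...' % (serverName, nodeName)
AdminControl.stopServer(serverName, nodeName)

print 'Starting %s on %s ...' % (serverName, nodeName)
AdminControl.startServer(serverName, nodeName)
```

A script like this would usually be wrapped in the shell-level stop/start scripts the posting refers to, with error handling and logging added for production use.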
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our team is looking to bring on board a highly skilled Senior MS SQL Server DBA Engineer with AWS migration experience. This role requires a hands-on professional adept at building, managing, and optimizing large-scale SQL database systems on-premises and in the cloud, and capable of transforming database infrastructures to leverage AWS capabilities.

Responsibilities

Build infrastructure in AWS for scalable and efficient data delivery solutions.
Maintain and manage high-volume database infrastructure on AWS.
Migrate database servers from on-premises to AWS environments (see the sketch after this posting).
Monitor and manage database systems and data pipelines supporting critical business products.
Troubleshoot engineering and production issues and provide hands-on expertise to resolve critical problems.
Support and optimize multi-terabyte database systems through tuning and administration.
Investigate and resolve data replication latency issues.
Develop automation solutions to improve team efficiency and infrastructure functionality.
Coordinate database/software releases across all environments, including QA, Pre-Production, Production, and DR.
Analyze and solve complex problems using cost/benefit analysis to implement effective solutions.
Diagnose and address real-time database engine and query performance issues.
Collaborate with IT, server management, and peer groups to resolve incidents in a timely manner.
Participate in cross-functional projects with operations, development, and QA teams.
Review existing processes and recommend improvements using new methods or tools.
Build and maintain best-in-class product support resources (e.g., runbooks, monitoring processes, tools, knowledge bases).
Ensure ongoing cloud, virtual, and bare-metal database infrastructure support.

Requirements

Bachelor's degree in Computer Science, Information Systems or Engineering.
6+ years of work experience in IT.
4+ years of hands-on database administration, support, and performance tuning experience on a relational database such as Microsoft SQL Server, PostgreSQL or Oracle.
2+ years of expertise in building with AWS Database Migration Service (DMS) or other ETL tools.
Good administration knowledge of Microsoft SQL Server.
Good familiarity/experience with AWS, specifically its infrastructure automation capabilities.
Good knowledge of Python or PowerShell.
Excellent communication skills, including strong verbal and written proficiency.
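To illustrate the kind of automation this role describes around AWS-based migration, below is a small, hypothetical Python (boto3) sketch that checks an AWS Database Migration Service replication task and starts or resumes it if it is not running. The task ARN and the status handling are illustrative assumptions, not details from the posting; credentials are taken from the standard AWS configuration.

```python
# Hypothetical sketch: inspect a DMS replication task used for an
# on-premises SQL Server -> AWS migration and (re)start it if needed.
import boto3

TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"  # placeholder


def ensure_task_running(task_arn: str) -> str:
    dms = boto3.client("dms")
    resp = dms.describe_replication_tasks(
        Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
    )
    task = resp["ReplicationTasks"][0]
    status = task["Status"]
    print(f"Task status: {status}")

    if status in ("ready", "stopped", "failed"):
        # A task that has never run needs 'start-replication'; otherwise
        # 'resume-processing' continues from the last checkpoint.
        start_type = "start-replication" if status == "ready" else "resume-processing"
        dms.start_replication_task(
            ReplicationTaskArn=task_arn,
            StartReplicationTaskType=start_type,
        )
        status = "starting"
    return status


if __name__ == "__main__":
    ensure_task_running(TASK_ARN)
```

In a real migration this check would typically sit alongside monitoring of replication latency and table statistics, which DMS also exposes through the same API.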
Posted 2 weeks ago
5.0 years
0 Lacs
Surat, Gujarat, India
Remote
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Posted 2 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Experience : 5.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Python, Postgre SQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, Pyspark, AWS Lambda Forbes Advisor is Looking for: Job Description: Data Research - Database Engineer Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards. 
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
Skills And Experience
A Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Experience working with APIs for data ingestion or for connecting third-party systems, to streamline data acquisition.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Clear and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leave

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview.

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Lucknow Area
Remote
Data Research - Database Engineer (Forbes Advisor, via Uplers). Experience: 5+ years; IST shift; remote; full-time permanent position. The role description, responsibilities, required skills, perks, and application steps are identical to the listing for this position above.
Posted 2 weeks ago
5.0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Data Research - Database Engineer (Forbes Advisor, via Uplers). Experience: 5+ years; IST shift; remote; full-time permanent position. The role description, responsibilities, required skills, perks, and application steps are identical to the listing for this position above.
Posted 2 weeks ago
5.0 years
0 Lacs
Thane, Maharashtra, India
Remote
Data Research - Database Engineer (Forbes Advisor, via Uplers). Experience: 5+ years; IST shift; remote; full-time permanent position. The role description, responsibilities, required skills, perks, and application steps are identical to the listing for this position above.
Posted 2 weeks ago
5.0 years
0 Lacs
Kanpur, Uttar Pradesh, India
Remote
Data Research - Database Engineer (Forbes Advisor, via Uplers). Experience: 5+ years; IST shift; remote; full-time permanent position. The role description, responsibilities, required skills, perks, and application steps are identical to the listing for this position above.
Posted 2 weeks ago
5.0 years
0 Lacs
Nagpur, Maharashtra, India
Remote
Data Research - Database Engineer (Forbes Advisor, via Uplers). Experience: 5+ years; IST shift; remote; full-time permanent position. The role description, responsibilities, required skills, perks, and application steps are identical to the listing for this position above.
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our team is looking to bring on board a highly skilled Senior MS SQL Server DBA Engineer with AWS migration experience. This role requires a hands-on professional adept at building, managing, and optimizing large-scale SQL database systems on-premises and in the cloud, capable of transforming database infrastructures to leverage AWS capabilities.

Responsibilities
Build infrastructure in AWS for scalable and efficient data delivery solutions
Maintain and manage high-volume database infrastructure on AWS
Migrate database servers from on-premises to AWS environments
Monitor and manage database systems and data pipelines supporting critical business products
Troubleshoot engineering and production issues and provide hands-on expertise to resolve critical problems
Support and optimize multi-terabyte database systems through tuning and administration
Investigate and resolve data replication latency issues (a minimal status-check sketch follows this listing)
Develop automation solutions to improve team efficiency and infrastructure functionality
Coordinate database/software releases across all environments, including QA, Pre-Production, Production, and DR
Analyze and solve complex problems using cost/benefit analysis to implement effective solutions
Diagnose and address real-time database engine and query performance issues
Collaborate with IT, server management, and peer groups to resolve incidents in a timely manner
Participate in cross-functional projects with operations, development, and QA teams
Review existing processes and recommend improvements using new methods or tools
Build and maintain best-in-class product support resources (e.g., run books, monitoring processes, tools, knowledge bases)
Ensure ongoing cloud, virtual, and bare-metal database infrastructure support

Requirements
Bachelor's degree in Computer Science, Information Systems, or Engineering
6+ years of work experience in IT
4+ years of hands-on database administration, support, and performance tuning experience on a relational database such as Microsoft SQL Server, PostgreSQL, or Oracle
2+ years of experience building migrations with AWS Database Migration Service (DMS) or other ETL tools
Good administration knowledge of Microsoft SQL Server
Good familiarity and experience with AWS, specifically its infrastructure automation capabilities
Good knowledge of Python or PowerShell
Excellent communication skills, including strong verbal and written proficiency
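As a small illustration of the replication-monitoring work mentioned in this listing, the following is a minimal sketch (not part of the posting) that lists AWS DMS replication tasks and their full-load progress with boto3. The region, credentials, and any alerting thresholds you would layer on top are assumptions.

```python
# Minimal sketch: report the status of AWS DMS replication tasks.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

def report_dms_tasks(region: str = "us-east-1") -> None:
    """Print status and full-load progress for each DMS replication task."""
    dms = boto3.client("dms", region_name=region)
    resp = dms.describe_replication_tasks()  # use the Marker field to paginate when there are many tasks
    for task in resp["ReplicationTasks"]:
        name = task["ReplicationTaskIdentifier"]
        status = task["Status"]                       # e.g. "running", "stopped", "failed"
        stats = task.get("ReplicationTaskStats", {})
        progress = stats.get("FullLoadProgressPercent", "n/a")
        print(f"{name}: status={status}, full-load progress={progress}%")

if __name__ == "__main__":
    report_dms_tasks()
```

In practice, ongoing replication latency is usually read from the DMS CloudWatch metrics (CDCLatencySource and CDCLatencyTarget) rather than from describe_replication_tasks; the call above is a quick first check of task state.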
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Job Title: Webflow Expert Developer (Landing Page Migration)
Job Type: Contract / Freelance
Timeline: Project kickoff to delivery within 5 days
Location: Remote

Project Overview:
We are looking for a Webflow expert developer to migrate an existing landing page, currently hosted on Vercel, to Webflow. The ideal candidate should have extensive hands-on experience in Webflow, a strong understanding of responsive design, and a portfolio demonstrating previous Webflow projects.

Key Requirements:
Proven expertise in Webflow design and development
Experience migrating landing pages or websites from other platforms to Webflow
Strong knowledge of HTML, CSS, and responsive layouts
Ability to match the current design and functionality with high precision
Strong attention to detail and a keen eye for design fidelity

Deliverables:
Fully functional Webflow version of the current landing page
Responsive across all major devices and browsers
Delivery within 5 days from project kickoff

Nice to Have:
Experience with SEO optimization in Webflow
Familiarity with animations and interactions in Webflow
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior MS SQL Server DBA Engineer with AWS migration experience. The role description, responsibilities, and requirements are identical to the Hyderabad listing for this position above.
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Generative AI Consultant for Supply Chain Business Process – Staff

The opportunity
We’re looking for Staff-level Consultants with expertise in Generative AI who can work on end-to-end pipelines: enabling data curation, building and fine-tuning Generative AI models, and deploying them into scalable production streams. Expertise in analytics, AI innovation, and large-scale data solutions is the key to joining the Supply Chain Technology group of our GDS consulting team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of a new service offering. This role demands a highly technical, extremely hands-on professional who will work closely with our EY Partners and external clients to develop new business and drive other initiatives across different business needs for various SC&O domains. The ideal candidate must have a good understanding of the problems and use cases that Gen AI models can solve with the highest accuracy, along with supply chain industry knowledge and proven experience in delivering solutions to different lines of business and to technical leadership.

Your Key Responsibilities
Design, develop, and fine-tune Generative AI models using frameworks like LangChain, Hugging Face, TensorFlow, or PyTorch.
Implement solutions for data pre-processing, tokenization, and dataset augmentation.
Deploy Generative AI models on cloud platforms (e.g., AWS, GCP, Azure) or edge devices, ensuring scalability and robustness.
Work on MLOps pipelines, including CI/CD workflows, model monitoring, and retraining strategies.
Conduct performance benchmarking, hyperparameter tuning, and optimization to improve model efficiency.
Stay updated on the latest trends and advancements in Generative AI and integrate best practices into project workflows.

Skills And Attributes For Success
2–4 years of hands-on experience in Machine Learning or Generative AI development.
Proficiency in Python and libraries like LangChain, TensorFlow, PyTorch, or Hugging Face Transformers.
Strong skills in data preprocessing, data cleaning, and working with large datasets.
Basic experience with cloud platforms (AWS, GCP, or Azure).
Familiarity with foundational Generative AI models (e.g., GPT-3.5, GPT-4, DALL-E, Text Embeddings Ada, LLaMA, T5, Bloom), prompt engineering, and their applications.
Familiarity with Vector Databases and embeddings (e.g., Pinecone, Weaviate).
Hands-on exposure to APIs for integrating Generative AI solutions into business applications.
Knowledge of NLP techniques like summarization, text classification, and entity recognition (a minimal illustrative sketch of these follows this list).
Certifications in Machine Learning or cloud platforms are a plus.
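As a small, hedged illustration of the NLP techniques named above (summarization, text classification, and entity recognition), the sketch below uses the Hugging Face transformers pipeline API. The example text and the reliance on the library's default models are assumptions made for illustration; they are not part of the role description.

```python
# Illustrative sketch only: summarization, text classification, and entity
# recognition via Hugging Face `transformers` pipelines. Default models are
# downloaded on first use; no model choice here comes from the posting.
from transformers import pipeline

text = (
    "Supply chain disruptions in 2021 forced many retailers to rethink their "
    "inventory strategies and invest in demand-forecasting models."
)

summarizer = pipeline("summarization")
classifier = pipeline("sentiment-analysis")            # a simple text-classification task
ner = pipeline("ner", aggregation_strategy="simple")   # groups word pieces into whole entities

print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
print(classifier(text)[0])                             # e.g. {'label': 'NEGATIVE', 'score': ...}
print(ner("Forbes Advisor uses BigQuery on Google Cloud."))
```

Each pipeline call returns plain Python lists and dicts, which is why these building blocks are easy to wire into the kinds of business-application integrations the role mentions.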
To qualify for the role, you must have
2–4 years of experience as a developer in ML, MLOps, and Generative AI LLMs.
Expertise in data engineering, data transformation, curation, feature selection, ETL mappings, and data warehouse concepts.
The ability to design AI solutions and present them as per client needs.
Thorough knowledge of Structured Query Language (SQL), Python, PySpark, Spark, and other languages.
Experience in developing end-to-end GenAI solutions, with the ability to migrate them to production.
Knowledge of cloud platforms such as Azure, AWS, and GCP.
Knowledge of frameworks such as LangChain, Hugging Face, and Azure ML Studio.
Knowledge of data modelling and vector database management and modelling.

Ideally, you’ll also have
Strong knowledge of programming concepts, cloud concepts, LLM models, design, and coding.
Expertise in data handling to resolve any data issues as per client needs.
Experience in designing and developing DB objects and vector DBs.
Experience creating complex SQL queries, PySpark code, and Python scripts for retrieving, manipulating, checking, and migrating complex datasets.
Experience in model selection and tuning.
Good verbal and written communication in English; strong interpersonal, analytical, and problem-solving abilities.

What We Look For
The incumbent should be able to drive Generative AI and ML-related development. Additional knowledge of data structures, preferably in the supply chain industry, will be an advantage.
An opportunity to be part of a market-leading, multi-disciplinary team of 10,000+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY GDS consulting practices globally with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Generative AI Consultant for Supply Chain Business Process – Staff (role description identical to the posting above).
Posted 2 weeks ago
6.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Flowserve is a world-leading manufacturer and aftermarket service provider of comprehensive flow control systems. Join a company whose people are committed to building a more sustainable future to make the world better for everyone. With 16,000+ employees in 50+ countries, we combine our global reach with local presence. Our team challenges itself to approach each situation with ingenuity and creativity to provide our customers with the most innovative flow control products and services. We support 10,000+ customers worldwide, creating products that meet the needs of customers supplying energy, fresh water, pharmaceuticals and other essentials to consumers, businesses and governments globally. We invite you to put your talents and career in motion at Flowserve.

Company Overview:
If a culture of excellence, innovation and ownership is what you’re searching for, consider putting your experience in motion at Flowserve. As an individual contributor or as a leader of people, your enterprise mindset will ensure Flowserve’s position as the global standard in comprehensive flow control solutions. Here, your opportunity for professional development and industry-leading rewards will be supported by our foundational commitments to the values of people first, integrity and safety. Thinking beyond opportunity and reward, at Flowserve we are inspired by working together to create extraordinary flow control solutions that make the world better for everyone.

Job Summary:
Flowserve is looking for a Buyer to help achieve our team goals by securing subcontracted components and services on the best terms of price, quality and delivery, in line with the recommendations of the global Supply Chain team. In this role you will develop and implement purchasing strategies by supporting the Commercial Operations team on advance procurement activities and by developing our strategy in the engineered markets, with the goal of increasing our profitability and on-time delivery. You will strengthen the alignment between the company's purchasing strategy and its business strategy. With full ownership of the supplier portfolio, you will work closely with the supply team to improve our delivery performance.

SCOPE OF RESPONSIBILITY
Set the example for the organization's ethical standards.
Foster culture change: act with purpose and in line with core values and behaviors to promote a professional sourcing mindset.
Lead execution of Project Sourcing functions: devise the supply strategy, perform risk assessments of all suppliers, execute RFPs/bids, negotiate contracts, and ensure an effective transition to operations during implementation.
Support the booking plan by executing Project Sourcing strategies with cross-functional stakeholders to enable the highest levels of customer satisfaction, competitiveness, quality and on-time delivery.
Execute and lead the Project Sourcing strategy in tune with Global Supply Chain strategies for high-impact purchases in the region.
Foster teamwork, provide strategy and expert consultation, and collaborate on goal alignment across platforms and functions with an enterprise mindset.
Make recommendations to migrate spend to LCS or other options that reduce cost, and promote savings initiatives to maximize impact.
Accountable for negotiating the best possible cost and service guarantees and for developing win-win strategies that build sustainable supplier relationships.
Promote digitalization of the supply chain to achieve spend visibility, reduce cycle time and deliver savings objectives.
Develop and maintain tools to track market-level prices for high-impact purchases.
Promote the value added by Project Sourcing to stakeholders through timely reporting of Key Result Area indices.
Collaborate with the Platforms and Sites to support timely order placement, obtaining vendor offers at the tender stage.
Support sites and Platforms on AOP and savings reporting.
Challenge existing processes against contemporary industry best practices, continuously seek to improve process productivity and efficiency, and share and adopt best practices to achieve the highest performance levels in each Key Result Area.

MAIN OBJECTIVES
Accountable for Project Supply Chain support to Sales and Commercial Operations, delivering best-in-market support, costings and competitiveness to optimize market share.
Ensure the organization benefits from contractual terms that flow down from clients by prescribing the correct terms to suppliers.
Deliver optimized cost savings and key results for the organization.

Competencies And Skills
6+ years of experience in Supply Chain, Engineering or Project Management roles
Minimum of a Diploma in Engineering, Supply Chain or a related field, accredited to perform at a professional level
Manufacturing, Engineering, Flow Management or related business experience
Able to read and understand engineering drawings and documents
Strong communication and interpersonal skills, with adaptability to various cultures
Good business acumen and negotiation skills
Good influencing capability and effective listening skills
MS Office skills

PREFERENCES
Certified Supply Chain Professional
Experience in Project Management
Knowledge of SAP and sourcing tools like Ariba
Knowledge of CRM tools like Salesforce

Req ID : R-11521
Job Family Group : Logistics
Job Family : LO Purchasing
EOE including Disability/Protected Veterans. Flowserve will also not discriminate against an applicant or employee for inquiring about, discussing or disclosing their pay or, in certain circumstances, the pay of their co-workers. Pay Transparency Nondiscrimination Provision. If you are a qualified individual with a disability or a disabled veteran, you have the right to request a reasonable accommodation if you are unable or limited in your ability to use or access flowservecareers.com as a result of your disability. You can request a reasonable accommodation by sending an email to employment@flowserve.com. To help us respond quickly, please use the words "Accommodation Request" as the subject line of your email. For more information, read the Accessibility Process.
Posted 2 weeks ago
The job market for data migration professionals in India is currently thriving, with opportunities available across many industries. Whether you are just starting your career or looking to make a transition, migration roles can offer a rewarding career path with real growth opportunities.
The leading hiring locations are cities with booming IT sectors and consistently high demand for data migration professionals.
The average salary range for data migration professionals in India varies by experience level. Entry-level professionals can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can command salaries upwards of INR 10-15 lakhs per annum.
A typical career path in this field may start as a Junior Developer, progress to Senior Developer, and then move up to a Tech Lead role. With experience and expertise, one can advance further to roles such as Solution Architect or Project Manager.
In addition to core migration skills, professionals in this field are often expected to have knowledge of related areas such as cloud computing, database management, programming languages like Java or Python, and software development methodologies.
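As a rough illustration of the day-to-day work these roles involve, the following minimal Python sketch moves a spreadsheet extract into a relational staging table with pandas and SQLAlchemy. The connection string, file name, column names and table name are hypothetical placeholders, not drawn from any specific employer.

```python
# Hypothetical sketch of a small spreadsheet-to-database migration task:
# extract, light cleaning, then load into a staging table. The connection
# string, file name, and table name are made up for illustration.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:password@localhost:5432/analytics")

df = pd.read_csv("legacy_customers.csv")             # source extract
df["email"] = df["email"].str.strip().str.lower()    # normalise before loading
df = df.drop_duplicates(subset=["customer_id"])      # basic de-duplication

# Load into a staging table; validation and reconciliation queries would follow.
df.to_sql("customers_staging", engine, if_exists="replace", index=False)
```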
As you explore opportunities in the data migration job market in India, remember to showcase your skills and experience confidently during interviews. Prepare thoroughly, stay updated on industry trends, and demonstrate your passion for data migration. Best of luck with your job search!