
287 Rollback Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

13.0 years

Salary not disclosed

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 13+ years. Salary: Confidential (based on experience). Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position. (Note: this is a requirement for one of Uplers' clients, Forbes Advisor.)

What do you need for this opportunity? Must-have skills: DBT, GCP, DWH, Data Modelling, Data Governance, Data Quality, Data Monitoring, Cost Management, Multi-Cloud.

Forbes Advisor is looking for:

Company Description
Forbes Advisor, part of the Forbes Marketplace family, provides consumers with expert-written insights, news, and reviews on personal finance, health, business, and everyday life decisions. We empower our audience with data-driven knowledge so they can make informed choices confidently, balancing the agility of a startup with the stability of a seasoned enterprise.

Role Overview
The Senior Data Architect is a strategic, senior leadership role responsible for setting the vision and direction of our data warehousing function. You will architect, implement, and maintain a state-of-the-art data warehouse that drives actionable insights across revenue, subscriptions, paid marketing channels, and operational functions. Your leadership will ensure data quality, robust pipeline design, and seamless integration with business intelligence tools. This role requires a strong mix of technical acumen, team management, and cross-functional collaboration, especially with teams focused on SEM, Digital Experiences, and revenue attribution.

Job Description: Key Responsibilities

Strategic Data Architecture & Pipeline Leadership
○ Vision & Strategy: Define and execute the long-term strategy for our data warehousing platform using medallion architecture (Bronze, Silver, Gold layers) and modern cloud-based solutions (an illustrative sketch of this layering follows this subsection).
○ End-to-End Pipeline Oversight: Oversee data ingestion (via Google Ads, Bing Ads, Facebook Ads, GA, APIs, SFTP, etc.), transformation (leveraging DBT and SQL via BigQuery), and reporting, ensuring that our pipelines are robust and scalable.
○ Data Modeling Best Practices: Champion best practices in data modeling, including the effective use of DBT packages to streamline complex transformations.
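Purely as an illustration of the medallion-style, Bronze-to-Silver work described above: the sketch below is not taken from the posting or from Forbes Advisor's systems, the project ID, dataset, table, and column names are hypothetical, and in the stack the posting describes this step would more likely live in a dbt model than a standalone script.

```python
# Illustrative sketch only: promote raw (Bronze) ad-spend rows into a cleaned,
# aggregated Silver table in BigQuery. All project/dataset/table/column names
# are hypothetical placeholders, not an actual Forbes Advisor schema.
from google.cloud import bigquery

client = bigquery.Client(project="example-dw-project")  # hypothetical project ID

SILVER_AD_SPEND_SQL = """
CREATE OR REPLACE TABLE silver.ad_spend_daily AS
SELECT
  DATE(spend_ts)                           AS spend_date,
  LOWER(channel)                           AS channel,      -- e.g. google_ads, bing_ads
  campaign_id,
  SUM(CAST(cost_micros AS NUMERIC)) / 1e6  AS cost,         -- normalize micros to currency units
  SUM(clicks)                              AS clicks,
  SUM(impressions)                         AS impressions
FROM bronze.raw_ad_spend
WHERE spend_ts IS NOT NULL
GROUP BY spend_date, channel, campaign_id
"""

def build_silver_ad_spend() -> None:
    """Run the Bronze -> Silver transformation and wait for it to finish."""
    job = client.query(SILVER_AD_SPEND_SQL)
    job.result()  # raises on failure, so an orchestrator can catch it and alert
    print(f"silver.ad_spend_daily rebuilt by job {job.job_id}")

if __name__ == "__main__":
    build_silver_ad_spend()
```

A Gold layer would then join such Silver tables into business-facing reporting marts; the exact layering is up to the team and is not specified in the posting.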
Data Quality, Governance & Attribution
○ Quality & Validation: Establish and enforce rigorous data quality standards, governance policies, and automated validation frameworks across all data streams.
○ Standardization & Visibility: Collaborate with the Data Engineering, Insights, and BIOps teams to standardize data definitions (including engagement metrics and revenue attribution) and ensure consistency across all reports.
○ Attribution Focus: Develop frameworks to reconcile revenue discrepancies and unify validation across Finance, SEM, and Analytics teams. Ensure accurate attribution of revenue and paid marketing channel performance, working closely with the SEM and Digital Experiences teams.
○ Monitoring & Alerting: Implement robust monitoring and alerting systems (e.g., Slack and email notifications) to quickly identify, diagnose, and resolve data pipeline issues.

Team Leadership & Cross-Functional Collaboration
○ People & Process: Lead, mentor, and grow a high-performing team of data warehousing specialists, fostering a culture of accountability, innovation, and continuous improvement.
○ Stakeholder Engagement: Partner with RevOps, Analytics, SEM, Finance, and Product teams to align the data infrastructure with business objectives. Serve as the primary data warehouse expert in discussions around revenue attribution and paid marketing channel performance, ensuring that business requirements drive technical solutions.
○ Communication: Translate complex technical concepts into clear business insights for both technical and non-technical stakeholders.

Operational Excellence & Process Improvement
○ Deployment & QA: Oversee deployment processes, including staging, QA, and rollback strategies, to ensure minimal disruption during updates.
○ Continuous Optimization: Regularly assess and optimize data pipelines for performance, scalability, and reliability while reducing operational overhead.
○ Legacy-to-Cloud Transition: Lead initiatives to transition from legacy on-premise systems to modern cloud-based architectures for improved agility and cost efficiency.

Innovation & Thought Leadership
○ Emerging Trends: Stay abreast of emerging trends and technologies in data warehousing, analytics, and cloud solutions.
○ Pilot Projects: Propose and lead innovative projects to enhance our data capabilities, with a particular focus on predictive and prescriptive analytics.
○ Executive Representation: Represent the data warehousing function in senior leadership discussions and strategic planning sessions.

Qualifications

Education & Experience
○ Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or a related field.
○ 15+ years of experience in data engineering, warehousing, or analytics roles, with at least 5 years in a leadership capacity.
○ Proven track record of designing and implementing scalable data warehousing solutions in cloud environments.

Technical Expertise
○ Deep experience with medallion architecture and modern data pipeline tools, including DBT (and DBT packages), Databricks, SQL, and cloud-based data platforms.
○ Strong understanding of ETL/ELT best practices, data modeling (logical and physical), and large-scale data processing.
○ Hands-on experience with BI tools (e.g., Tableau, Looker) and familiarity with Google Analytics and other tracking systems.
○ Solid understanding of attribution models (first-touch, last-touch, multi-touch) and experience working with paid marketing channels (a simplified illustration of these models follows this list).
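As a purely illustrative aside (not part of the posting), the three attribution models named above differ only in how credit for a single conversion is split across a user's touchpoints, taken in chronological order. A minimal sketch with hypothetical channel names:

```python
# Illustrative sketch of the attribution models named in the posting.
# Touchpoints are a user's marketing interactions in chronological order;
# each model splits credit for one conversion (value 1.0) differently.
from collections import defaultdict

def first_touch(touchpoints):
    """All credit goes to the first channel the user interacted with."""
    return {touchpoints[0]: 1.0}

def last_touch(touchpoints):
    """All credit goes to the final channel before conversion."""
    return {touchpoints[-1]: 1.0}

def linear_multi_touch(touchpoints):
    """Equal credit to every touchpoint (one common multi-touch scheme)."""
    credit = defaultdict(float)
    for channel in touchpoints:
        credit[channel] += 1.0 / len(touchpoints)
    return dict(credit)

if __name__ == "__main__":
    journey = ["paid_search", "display", "email", "paid_search"]  # hypothetical journey
    print(first_touch(journey))         # {'paid_search': 1.0}
    print(last_touch(journey))          # {'paid_search': 1.0}
    print(linear_multi_touch(journey))  # {'paid_search': 0.5, 'display': 0.25, 'email': 0.25}
```

In practice, position-based or data-driven weightings are also common; the posting does not say which scheme the team uses.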
Leadership & Communication
○ Excellent leadership and team management skills, with the ability to mentor and inspire cross-functional teams.
○ Outstanding communication skills, capable of distilling complex technical information into clear business insights.
○ Demonstrated ability to lead strategic initiatives, manage competing priorities, and deliver results in a fast-paced environment.

Perks & Benefits
○ Flexible/Remote Working: Enjoy flexible work arrangements in a collaborative, distributed team culture.
○ Competitive Compensation: Attractive salary, performance-based bonuses, and comprehensive benefits.
○ Time Off: Generous paid time off, parental leave policies, and a dedicated day off on the third Friday of each month.

If you are a visionary leader with a passion for building resilient data infrastructure, a deep understanding of revenue attribution and paid marketing channels, and a proven ability to drive strategic business outcomes through data, we invite you to join our Data & Analytics team and shape the future of our data warehousing function.

How to apply for this opportunity?
Step 1: Click "Apply" and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal beyond this one; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Agra, Uttar Pradesh, India

Remote

Source: LinkedIn

Senior Data Architect at Forbes Advisor (via Uplers), full-time and remote; same role and description as the Noida listing above.

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Jaipur, Rajasthan, India

Remote

Source: LinkedIn

Senior Data Architect at Forbes Advisor (via Uplers), full-time and remote; same role and description as the Noida listing above.

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Greater Lucknow Area

Remote

Source: LinkedIn

Senior Data Architect at Forbes Advisor (via Uplers), full-time and remote; same role and description as the Noida listing above.

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Thane, Maharashtra, India

Remote

Source: LinkedIn

Senior Data Architect at Forbes Advisor (via Uplers), full-time and remote; same role and description as the Noida listing above.

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Nagpur, Maharashtra, India

Remote

Source: LinkedIn

Senior Data Architect at Forbes Advisor (via Uplers), full-time and remote; same role and description as the Noida listing above.

Posted 3 weeks ago

Apply

13.0 years

Salary not disclosed

Nashik, Maharashtra, India

Remote

Source: LinkedIn


Posted 3 weeks ago

Apply

13.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

Remote



Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

Remote


NeuraTalk, an AI company backed by Singapore investors, is seeking a Python MLOps Engineer to join our founding team.

Opportunity: For experienced candidates only. The role starts with a 3-month provisional period at a fixed pay of ₹25,000 per month. Upon successful completion, candidates will be offered a full-time role with a package of ₹6 LPA, plus an annual bonus.

Role Overview: Design, build, and maintain advanced MLOps pipelines for cutting-edge AI models, focusing on agentic RAG, LangChain integration, fine-tuning, and scalable deployment. Open to experienced candidates passionate about ML infrastructure and automation.

Technical Requirements:
- Strong Python programming skills (required)
- Experience with advanced Retrieval-Augmented Generation (RAG) systems and agentic AI design
- Proficiency with the LangChain framework for building LLM applications
- Experience fine-tuning large language models on domain-specific data
- Skilled in building end-to-end ML pipelines for training, validation, and deployment
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS, GCP, or Azure)
- Experience with CI/CD for ML models and monitoring model performance post-deployment
- Understanding of vector databases and embedding techniques (e.g., FAISS, Pinecone)
- Knowledge of model serving frameworks (e.g., FastAPI, TorchServe)
- Experience with experiment tracking and hyperparameter tuning tools (e.g., MLflow, Weights & Biases)
- Basic understanding of NLP and transformer architectures

Responsibilities:
- Develop and maintain scalable MLOps pipelines for advanced AI solutions
- Build and optimize agentic RAG workflows integrating LangChain and other LLM tools (a retrieval sketch follows this listing)
- Fine-tune models on specialized datasets to improve accuracy and relevance
- Automate model deployment and monitoring to ensure high availability and performance
- Collaborate closely with data scientists and ML researchers to operationalize models
- Implement logging, alerting, and performance tracking for production models
- Contribute to infrastructure design for seamless model updates and rollback

For Experienced Developers:
- 2+ years in Python-based MLOps or ML engineering roles
- Hands-on experience with LangChain or similar agentic AI frameworks
- Strong cloud deployment and container orchestration skills
- Proven track record of delivering production ML pipelines
- Ability to work independently and as part of a cross-functional team

Benefits: Remote work flexibility; early-stage equity (ESOP); direct collaboration with the founding AI research and product team; opportunities for growth and leadership in AI infrastructure; future Singapore relocation opportunity; continuous learning and innovation-driven environment.

Work Culture: Innovation-driven AI startup; remote-first with flexible working hours; transparent communication and team-oriented; direct impact on cutting-edge AI product development; equal opportunity employer promoting diversity and inclusion.

#AI #MLOps #Python #LangChain #RAG #MLDeployment #RemoteWork #Startup #India #Singapore
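For context on the agentic-RAG work described above, the retrieval step of a RAG pipeline can be sketched in a few lines. The snippet below is illustrative only: it uses FAISS with random stand-in embeddings and says nothing about NeuraTalk's actual stack or LangChain wiring.

```python
# Illustrative only: FAISS nearest-neighbour retrieval, the core of a RAG pipeline.
# Embeddings are random stand-ins; a real system would use an embedding model and
# feed the retrieved documents into the LLM prompt.
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384                                   # assumed embedding size
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)   # stand-in corpus embeddings

index = faiss.IndexFlatL2(dim)              # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)            # stand-in query embedding
distances, ids = index.search(query, 5)     # top-5 nearest documents
print(ids[0], distances[0])                 # document ids to pull into the LLM context
```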

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Charles Technologies is a dynamic startup based in Chennai, dedicated to creating innovative mobile applications that transform user experiences. We are looking for a talented and experienced MERN Stack Developer to join our team and lead the development of innovative web and mobile applications. Qualifications: Education: BE in Computer Science, Information Technology, or B.Tech in an IT-related field is required. A Master’s degree is a plus. Relevant certifications are also a plus. Experience: Minimum of 2 years of total experience in full stack application development. Extensive experience working with startups, small teams, and in fast-paced environments is highly desirable. Foundational Knowledge: Strong understanding of software engineering principles, product development, and web/mobile application development best practices. Technical Skills: JavaScript​ : Expert-level proficiency in JavaScript, including ES6+ features, asynchronous programming, and modern frameworks .React Native : Extensive experience in developing cross-platform mobile applications using React Native, including performance optimization and native module integration React : Advanced expertise in React for front-end development, including hooks, context API, state management libraries like Redux, and component lifecycle management Node.js : Solid knowledge of Node.js for backend development, including experience with Express.js, RESTful API design, and asynchronous programming patterns Azure Cosmos DB : Extensive experience with Azure Cosmos DB for scalable and efficient data management, including partitioning, indexing, querying, and performance tuning Azure Cloud Services : Proficiency in deploying and managing applications on Azure Cloud Services, including Azure App Services, Azure Functions, Azure Storage, and monitoring tools Git : Proficient in version control systems like Git, including branching, merging strategies, pull request workflows, and conflict resolution Azure DevOps : Experience with Azure DevOps for CI/CD pipelines, project management, automated testing, and release management API Integration : Experience in integrating RESTful APIs and third-party services, including OAuth, JWT, and other authentication and authorization mechanisms UI/UX Design : Understanding of UI/UX design principles and ability to collaborate with designers to implement responsive, accessible, and user-friendly interfaces Responsibilities Full Stack Development : Develop and maintain high-quality web and mobile applications using React Native, React, and Node.js, ensuring code quality, performance, and scalability Backend Development : Implement backend services and APIs using Node.js, ensuring scalability, security, and maintainability Database Management : Manage and optimize databases using Azure Cosmos DB, including data modelling, indexing, partitioning, and performance tuning .Version Control : Use Git for version control, including branching, merging, and pull request workflows. 
Conduct peer code reviews to ensure code quality and share knowledge with team members CI/CD Pipelines : Set up and maintain CI/CD pipelines using Azure DevOps, including automated testing, deployment, monitoring, and rollback strategies Peer Code Reviews : Participate in peer code reviews to ensure adherence to coding standards, identify potential issues, and share best practices Performance Optimization : Optimize application performance and ensure responsiveness across different devices and platforms, including profiling, debugging, and performance tuning Collaboration : Work closely with designers, product owners, and other developers to deliver high-quality applications. Participate in agile development processes, including sprint planning, stand-ups, and retrospectives Testing and Debugging : Conduct thorough testing and debugging to ensure the reliability and stability of applications, including unit testing, integration testing, and end-to-end testing Documentation : Create and maintain comprehensive documentation for code, APIs, and development processes, including technical specifications and user guides Continuous Improvement : Stay updated with the latest industry trends and technologies, and continuously improve development practices. Participate in knowledge-sharing sessions and contribute to the growth of the team Perks & Benefits Central Location : Conveniently located in the heart of the city, with parking facilities and well-served by public transport including buses and Chennai Metro Meals and Refreshments : Lunch, tea/coffee, snacks, and refreshments provided throughout the day Insurance : TATA AIG Family Group Insurance for INR 5.0 Lakhs (Coverage: Self + Spouse + Up to 3 Children) Professional Development : Opportunities for continuous learning and growth Team Outings and Events : Regular team-building activities and events Employee Recognition : Programs to acknowledge and reward outstanding performance How to Apply : Interested candidates can apply through LinkedIn or email us at careers@charles-technologies.com. Join us at Charles Technologies and be a part of a team that is shaping the future of mobile applications! Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Responsibilities Administer, configure, and maintain Microsoft SQL Server (on-premises) and AWS RDS/Redshift environments. Perform regular maintenance tasks such as backups, restores, patching, and capacity planning. Manage database security, user access, and roles across environments. Provision, configure, and manage AWS RDS (SQL Server, PostgreSQL) and Redshift instances. Implement backup strategies, monitoring, and disaster recovery solutions in the cloud. Automate routine database tasks and processes using AWS tools and scripting. Deploy and monitor AWS Glue and AWS Lambda. Troubleshoot ETL job failures, ensure data quality, and support timely delivery of data. Use tools like AWS CloudWatch, SolarWinds, and Redgate SQL Monitor for real-time performance tracking and alerting. Identify and resolve performance bottlenecks in SQL queries, indexes, and server configurations. Act as a point of contact for database-related incidents and outages. Perform root cause analysis, document findings, and work with engineering teams to implement long-term fixes. Maintain comprehensive and up-to-date documentation on database systems, configurations, and procedures. Collaborate with development and DevOps teams to support database and data platform needs. Contribute to automation and infrastructure improvements in cloud and hybrid environments. Maintain detailed documentation and knowledge base articles for internal : Experience as a database administrator, with a strong foundation in SQL Server administration, backup/restore strategies, and high availability solutions (e.g., Always On, clustering). Hands-on experience managing AWS RDS (SQL Server/PostgreSQL) and Amazon Redshift, including provisioning, scaling, backups, snapshots, and security configurations. Proficiency with monitoring tools like AWS CloudWatch, SolarWinds, and Redgate SQL Monitor, with the ability to configure alerts, identify trends, and proactively address performance bottlenecks. Expertise in performance tuning for : SQL Server : Execution plan analysis, indexing strategies, TempDB optimization, query tuning. RDS : Parameter group tuning, performance insights, instance sizing. Redshift : WLM configuration, vacuum/analyze, distribution/sort keys, and query optimization. Strong understanding of database security best practices, user access controls, encryption, and auditing. Experience managing incident response, including root cause analysis, mitigation planning, and follow-up documentation. Ability to create and maintain detailed runbooks, SOPs, and knowledge base articles for repeatable processes and troubleshooting procedures. Comfortable working in hybrid environments, with coordination across on-premises and cloud-based systems. Familiarity with automation and scripting using PowerShell, Python, or Bash to streamline database tasks and monitoring. Hands-on experience with CI/CD pipelines to support database changes and deployments using tools like AWS CodePipeline or GitLab CI. Experience integrating database deployments into DevOps pipelines, including version-controlled DDL/DML scripts, pre-deployment checks, and rollback strategies. Ability to perform manual deployments when required (e.g., via SSMS, pgAdmin, or SQL scripts) while adhering to change management processes. Ability to work independently, manage priorities, and take ownership of tasks in a distributed team environment. 
Strong communication and interpersonal skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders. A proactive and detail-oriented mindset, with a focus on continuous improvement and system reliability.
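The monitoring and alerting duties in the listing above (CloudWatch-based tracking of RDS and proactive bottleneck detection) can be automated with a few lines of boto3. The snippet below is a hedged sketch only — the region, instance identifier, and SNS topic ARN are placeholders, not details from the posting.

```python
# Sketch: a CloudWatch alarm on RDS CPU that notifies a DBA on-call SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
cloudwatch.put_metric_alarm(
    AlarmName="rds-prod-cpu-high",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-sqlserver-01"}],
    Statistic="Average",
    Period=300,                      # 5-minute datapoints
    EvaluationPeriods=3,             # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:dba-oncall"],
)
```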

Posted 3 weeks ago

Apply

0 years

0 Lacs

Secunderābād, Telangana, India

On-site


About Us JOB DESCRIPTION SBI Card is a leading pure-play credit card issuer in India, offering a wide range of credit cards to cater to diverse customer needs. We are constantly innovating to meet the evolving financial needs of our customers, empowering them with digital currency for seamless payment experience and indulge in rewarding benefits. At SBI Card, the motto 'Make Life Simple' inspires every initiative, ensuring that customer convenience is at the forefront of all that we do. We are committed to building an environment where people can thrive and create a better future for everyone. SBI Card is proud to be an equal opportunity & inclusive employer and welcome employees without any discrimination on the grounds of race, color, gender, religion, creed, disability, sexual orientation, gender identity, marital status, caste etc. SBI Card is committed to fostering an inclusive and diverse workplace where all employees are treated equally with dignity and respect which makes it a promising place to work. Join us to shape the future of digital payment in India and unlock your full potential. What’s In It For YOU SBI Card truly lives by the work-life balance philosophy. We offer a robust wellness and wellbeing program to support mental and physical health of our employees Admirable work deserves to be rewarded. We have a well curated bouquet of rewards and recognition program for the employees Dynamic, Inclusive and Diverse team culture Gender Neutral Policy Inclusive Health Benefits for all - Medical Insurance, Personal Accidental, Group Term Life Insurance and Annual Health Checkup, Dental and OPD benefits Commitment to the overall development of an employee through comprehensive learning & development framework Role Purpose Responsible for the management of all collections processes for allocated portfolio in the assigned CD/Area basis targets set for resolution, normalization, rollback/absolute recovery and ROR. 
Role Accountability Conduct timely allocation of portfolio to aligned vendors/NFTEs and conduct ongoing reviews to drive performance on the business targets through an extended team of field executives and callers Formulate tactical short term incentive plans for NFTEs to increase productivity and drive DRR Ensure various critical segments as defined by business are reviewed and performance is driven on them Ensure judicious use of hardship tools and adherence to the settlement waivers both on rate and value Conduct ongoing field visits on critical accounts and ensure proper documentation in Collect24 system of all field visits and telephone calls to customers Raise red flags in a timely manner basis deterioration in portfolio health indicators/frauds and raise timely alarms on critical incidents as per the compliance guidelines Ensure all guidelines mentioned in the SVCL are adhered to and that process hygiene is maintained at aligned agencies Ensure 100% data security using secured data transfer modes and data purging as per policy Ensure all customer complaints received are closed within time frame Conduct thorough due diligence while onboarding/offboarding/renewing a vendor and all necessary formalities are completed prior to allocating Ensure agencies raise invoices timely Monitor NFTE ACR CAPE as per the collection strategy Measures of Success Portfolio Coverage Resolution Rate Normalization/Roll back Rate Settlement waiver rate Absolute Recovery Rupee collected NFTE CAPE DRA certification of NFTEs Absolute Customer Complaints Absolute audit observations Process adherence as per MOU Technical Skills / Experience / Certifications Credit Card knowledge along with good understanding of Collection Processes Competencies critical to the role Analytical Ability Stakeholder Management Problem Solving Result Orientation Process Orientation Qualification Post-Graduate / Graduate in any discipline Preferred Industry FSI Show more Show less

Posted 3 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat

Remote


EXPERIENCE THAT MATTERS Our Employees Always Come First Get the Recognition You Deserve Best Opportunity to Learn & Grow Freedom & Flexibility to Perform Balance Your Professional & Personal Life Professional Yet Friendly Environment Stay Abreast with Current Technologies Think like an Entrepreneur Opportunity to Innovate & Succeed We Help Bring Out the Best in You LAUNCH A NEW CHAPTER IN YOUR CAREER Flaunt Your Talent Vrinsoft is more than your Workstation. It is an opportunity to showcase your talent too. Fueling Sportsmanship An opportunity to work in tandem with your teammates and build your team spirit Employee Collaboration A Professional Environment to collaborate with teammates! Brainstorming & exploring various perspectives Festival Celebrations Maintain the Festive Spirit as Well – We Celebrate all Festivals Ideas Worth Spreading The perfect platform to innovate, perform well and grow. Fit For Life We believe in a healthy lifestyle & support your zeal to stay fit. Helping Hands Expert help is always available to get you out of a fix. Read Learn Discover A good place to explore your potential and continue to learn and grow. WHAT WE LOOK FOR? 01. Positive Attitude Maintain a positive attitude and follow the office culture. Embrace the core values & maintain a good team attitude. 02. Leadership Traits We welcome those with leadership skills to contribute to the team and explore their potential. 03. Excellent Team Player Good team spirit and an ability to work closely with other team members. 04. Learn and Grow Eager to learn further & expand their knowledge. Passion to further their career. 05. High Goals People who set their goals high and make an effort to achieve them. 06. Self-Motivated We like self-motivated employees who have a passion for their work? RECRUITMENT PROCESS Step 01 Shortlist Candidates Accept candidate profiles Screen profiles Evaluate & verify eligibility Initial HR interview Step 02 Practical Test Aptitude test Technical test Skill test (as applicable) Step 03 Evaluation Skill Level Basic level Advanced level Expert level Step 04 HR Round Company Policies Company Culture & Perks Remuneration Answer your questions Senior System and Server Admin 3 - 5 years Ahmedabad SERVER ADMIN Job Title: Server Administrator – On-Premise & Cloud Infrastructure Location: Ahmedabad, Gujarat, India (Work from Office) Experience: 3–5 Years Department: IT / Infrastructure & DevOps Employment Type: Full-time Keywords: Server Administration, Cloud Infrastructure (AWS / Azure / GCP), VPS Providers (GoDaddy / DigitalOcean / Linode / Vultr), Linux & Windows Servers, Server Optimization, Rollback & Deployment Automation, Monitoring Tools (Zabbix / Prometheus / Grafana), Backup, Rollback & Disaster Recovery, Security & Compliance (Firewall / IAM / SSL / VPN) About the Role We are seeking a skilled and proactive Server Administrator with 3–5 years of hands-on experience in managing on-premise and cloud-based infrastructure. This role is central to ensuring the performance, availability, security, and scalability of our internal systems, cost effective and optimum utilization or infrastructure, staging/production environments, and DevOps pipelines. You will collaborate with developers, DevOps, and IT teams to support deployments, enforce security standards, manage infrastructure hosted across physical servers, VPS providers, and major cloud platforms (AWS, Azure, GCP), and ensure proper rollback mechanisms during deployments. Key Responsibilities 1. 
Server Administration (Windows & Linux) Install, configure, and maintain physical and virtual servers. Manage user accounts, groups, roles, and access control policies. Monitor and optimize CPU, memory, disk, and network usage. Apply security patches, updates, and software upgrades. Troubleshoot and resolve server performance issues and outages. 2. Cloud Infrastructure Management (AWS, Azure, GCP, etc.) Deploy, configure, and maintain cloud services including: VMs, storage, networking, and managed databases Implement cost optimization strategies and usage policies. Configure IAM roles, security groups, and cloud permissions. Monitor cloud systems using native tools like AWS CloudWatch, Azure Monitor, etc. 3. Deployment & Rollback Management Collaborate with DevOps and engineering teams on infrastructure deployments. Create and maintain rollback plans and scripts to recover from failed deployments. Use version-controlled configuration management to allow quick recovery to stable states. Automate and test rollback processes to ensure minimal downtime and data integrity. Document rollback procedures and ensure teams are trained to execute them when needed. 4. Backup & Disaster Recovery Establish and maintain backup strategies for cloud and on-prem systems. Regularly verify backup integrity and execute recovery drills. Define and implement disaster recovery plans to ensure minimal downtime. 5. Security & Compliance Enforce infrastructure hardening and adhere to security best practices. Manage firewalls, encryption, antivirus, and multi-factor authentication. Audit systems for vulnerabilities and apply mitigations. Handle SSL certificate management, VPN setup, and secure remote access. 6. Monitoring & Reporting Deploy and manage monitoring tools such as: Zabbix, Prometheus, Nagios, Grafana Create and maintain system health dashboards, uptime reports, and incident logs. Respond to alerts proactively and resolve infrastructure issues efficiently. 7. Collaboration & Documentation Work closely with DevOps and development teams to support CI/CD, deployments, and testing environments. Document: System configurations Standard operating procedures (SOPs) Troubleshooting guides Change logs and incident resolutions 8. Continuous Improvement Research and recommend emerging technologies for infrastructure enhancement. Regularly review infrastructure for cost, performance, and reliability improvements. Stay current with cloud certifications, compliance standards, and industry practices. Required Skills & Qualifications 3–5 years of hands-on experience in server administration and infrastructure management. Strong knowledge of Windows and Linux systems administration. Proven experience with cloud platforms: AWS, Azure, GCP. Familiarity with VPS providers (DigitalOcean, Linode, Vultr, etc.). Proficient in configuring: Web servers (e.g., NGINX, Apache) Database servers (MySQL, PostgreSQL, MongoDB) DNS, SMTP, FTP, and caching servers Experience with source control hosting (GitLab, GitHub, Bitbucket) and integration with CI/CD tools. Understanding of networking concepts: TCP/IP, DNS, VPN, VLAN, routing, firewalls. Experience with monitoring and logging tools (e.g., Grafana, Prometheus, ELK stack). Knowledge of security compliance standards (e.g., ISO, SOC 2, GDPR, or internal IT audits). Good scripting skills in Bash, PowerShell, or Python for automation tasks. Preferred Skills Experience with infrastructure as code tools (Terraform, Ansible, CloudFormation). 
Basic knowledge of DevOps tools: Docker, Kubernetes, Helm. Experience with load balancing and high availability setups. Familiarity with SSL lifecycle management and certificate automation (e.g., Let's Encrypt). Soft Skills Strong problem-solving and critical thinking skills. Excellent communication and ability to collaborate across teams. Capable of managing priorities in a dynamic and fast-paced environment. Detail-oriented with a focus on documentation and standardization. Ability to handle conflict, resolve infrastructure bottlenecks, and support incident response. Why Join Us? Be a foundational part of a well-established engineering and AI-focused organization. Gain exposure to hybrid infrastructure models across cloud and on-premise. Work with modern tools and practices to support real-world AI, web, and enterprise applications. Contribute to a culture of excellence, security, and reliability. Apply Now On hr@vrinsofts.com OR Call Us on +91 7574 926643
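The rollback-management responsibilities described in the listing above (version-controlled rollback plans, automated recovery to a stable state, minimal downtime) often come down to a "releases directory plus current symlink" pattern. The sketch below illustrates that pattern only; the paths and naming convention are assumptions, not Vrinsoft's actual infrastructure.

```python
# Illustrative rollback helper: keep one directory per deployed version and flip a
# `current` symlink back to the previous release. Assumes release directory names
# sort chronologically (e.g. timestamps).
import os
from pathlib import Path

RELEASES = Path("/srv/app/releases")    # one directory per deployed version (assumption)
CURRENT = Path("/srv/app/current")      # symlink the web server actually serves

def rollback() -> Path:
    releases = sorted(p for p in RELEASES.iterdir() if p.is_dir())
    live = CURRENT.resolve()
    older = [p for p in releases if p.name < live.name]   # releases before the live one
    if not older:
        raise RuntimeError("no earlier release to roll back to")
    target = older[-1]
    tmp = CURRENT.with_name("current.tmp")
    tmp.unlink(missing_ok=True)
    tmp.symlink_to(target)
    os.replace(tmp, CURRENT)            # atomic swap keeps downtime close to zero
    return target

if __name__ == "__main__":
    print("rolled back to", rollback())
```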

Posted 4 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site


We're Hiring: Senior SRE Engineer — in our New Delhi Office (On-Site Only) Full-Time | Office-Based | ₹125,000–₹145,000 NET/month (approx. $1,500–$1,700) Memorae is building the future of personal productivity, an AI assistant that lives natively inside WhatsApp and remembers what matters to you. Our platform helps people across the world automate reminders, follow-ups, tasks, and calendar flows with zero friction and full memory. We're now hiring a Senior Site Reliability Engineer (SRE) to join our team on-site in New Delhi . You’ll be one of the technical pillars of the company; owning the infrastructure, cloud systems, observability, and reliability architecture that powers a product being used across 15+ countries and scaling rapidly. This is not a role for someone looking to maintain the status quo. You will build, break, and rebuild with resilience and purpose , and be expected to push the limits of performance and uptime with a mindset of high ownership and execution. What You’ll Be Doing Architecting and owning our entire infrastructure stack (currently AWS + third-party integrations + OpenAI APIs) for scalability, performance, and cost efficiency. Building and managing CI/CD pipelines with automated testing, deployment, rollback, and monitoring capabilities. Designing and enforcing best practices for disaster recovery, secrets management, and zero-downtime deployments. Monitoring and alerting : You will implement robust monitoring strategies (e.g., Prometheus, Grafana, Datadog, Sentry, etc.) and ensure uptime and reliability targets. Diagnosing bottlenecks and improving performance across backend services, APIs, and data pipelines. Security by design : You’ll be responsible for securing access, ensuring compliance, and setting up defensive infrastructure layers. Collaborating tightly with the Head of Product Engineering and CTO to ensure infrastructure decisions align with our product roadmap. What We’re Looking For 5+ years of professional experience in an SRE, DevOps, or Infrastructure role. Deep hands-on experience with AWS (ECS, Lambda, RDS, S3, CloudWatch, VPCs, etc.). Proven ability to manage production-grade environments with traffic, scale, and cost-control priorities. Experience working in high-pressure, startup-like environments where systems move fast. Strong expertise in Docker, Kubernetes , or other container orchestration platforms. Strong scripting skills: Bash, Python, or Go . Experience setting up and managing CI/CD pipelines (GitHub Actions, GitLab CI/CD, or similar). Experience with infrastructure-as-code (Terraform or Pulumi preferred). Excellent communication skills. You must be able to document, communicate, and collaborate clearly across departments. Based in New Delhi or willing to relocate. This is an on-site, in-office role. Nice to Have Experience working with AI workloads or integrations with OpenAI, LangChain, etc. Knowledge of edge infrastructure , CDNs, and caching strategies for latency optimization. Familiarity with WhatsApp Business APIs or other messaging platforms. What We Offer A monthly net salary of $1,500–$1,700 depending on experience (₹125,000–₹145,000). A fast-paced, international team operating with high standards and total transparency. Clear growth path and full ownership of your domain from day one. Deep involvement in shaping the future of one of the most ambitious AI x Productivity startups globally. A no-bullshit, high-performance culture where clarity, feedback, and ambition matter more than hierarchy or ego. Show more Show less
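As an illustration of the automated-rollback capability this role would own, the sketch below rolls an AWS ECS service back to its previous task-definition revision with boto3 and waits for it to stabilise. The cluster, service, and region names are placeholders; Memorae's real pipeline is not described in the posting.

```python
# Sketch: roll an ECS service back one task-definition revision and wait for stability.
import boto3

ecs = boto3.client("ecs", region_name="ap-south-1")
service = ecs.describe_services(cluster="prod", services=["api"])["services"][0]
current = service["taskDefinition"]               # e.g. ...:task-definition/api:42
family, revision = current.rsplit(":", 1)
previous = f"{family}:{int(revision) - 1}"        # assumes the prior revision still exists

ecs.update_service(cluster="prod", service="api",
                   taskDefinition=previous, forceNewDeployment=True)
ecs.get_waiter("services_stable").wait(cluster="prod", services=["api"])
print("rolled back to", previous)
```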

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Charles Technologies is a dynamic startup based in Chennai, dedicated to creating innovative mobile applications that transform user experiences. We are looking for a talented and experienced MERN Stack Developer to join our team and lead the development of innovative web and mobile applications. Qualifications: Education: BE in Computer Science, Information Technology, or B.Tech in an IT-related field is required. A Master’s degree is a plus. Relevant certifications are also a plus. Experience: Minimum of 3 years of total experience in full stack application development. Extensive experience working with startups, small teams, and in fast-paced environments is highly desirable. Foundational Knowledge: Strong understanding of software engineering principles, product development, and web/mobile application development best practices. Technical Skills: JavaScript​ : Expert-level proficiency in JavaScript, including ES6+ features, asynchronous programming, and modern frameworks .React Native : Extensive experience in developing cross-platform mobile applications using React Native, including performance optimization and native module integration React : Advanced expertise in React for front-end development, including hooks, context API, state management libraries like Redux, and component lifecycle management Node.js : Solid knowledge of Node.js for backend development, including experience with Express.js, RESTful API design, and asynchronous programming patterns Azure Cosmos DB : Extensive experience with Azure Cosmos DB for scalable and efficient data management, including partitioning, indexing, querying, and performance tuning Azure Cloud Services : Proficiency in deploying and managing applications on Azure Cloud Services, including Azure App Services, Azure Functions, Azure Storage, and monitoring tools Git : Proficient in version control systems like Git, including branching, merging strategies, pull request workflows, and conflict resolution Azure DevOps : Experience with Azure DevOps for CI/CD pipelines, project management, automated testing, and release management API Integration : Experience in integrating RESTful APIs and third-party services, including OAuth, JWT, and other authentication and authorization mechanisms UI/UX Design : Understanding of UI/UX design principles and ability to collaborate with designers to implement responsive, accessible, and user-friendly interfaces Responsibilities Full Stack Development : Develop and maintain high-quality web and mobile applications using React Native, React, and Node.js, ensuring code quality, performance, and scalability Backend Development : Implement backend services and APIs using Node.js, ensuring scalability, security, and maintainability Database Management : Manage and optimize databases using Azure Cosmos DB, including data modelling, indexing, partitioning, and performance tuning .Version Control : Use Git for version control, including branching, merging, and pull request workflows. 
Conduct peer code reviews to ensure code quality and share knowledge with team members CI/CD Pipelines : Set up and maintain CI/CD pipelines using Azure DevOps, including automated testing, deployment, monitoring, and rollback strategies Peer Code Reviews : Participate in peer code reviews to ensure adherence to coding standards, identify potential issues, and share best practices Performance Optimization : Optimize application performance and ensure responsiveness across different devices and platforms, including profiling, debugging, and performance tuning Collaboration : Work closely with designers, product owners, and other developers to deliver high-quality applications. Participate in agile development processes, including sprint planning, stand-ups, and retrospectives Testing and Debugging : Conduct thorough testing and debugging to ensure the reliability and stability of applications, including unit testing, integration testing, and end-to-end testing Documentation : Create and maintain comprehensive documentation for code, APIs, and development processes, including technical specifications and user guides Continuous Improvement : Stay updated with the latest industry trends and technologies, and continuously improve development practices. Participate in knowledge-sharing sessions and contribute to the growth of the team Perks & Benefits Central Location : Conveniently located in the heart of the city, with parking facilities and well-served by public transport including buses and Chennai Metro Meals and Refreshments : Lunch, tea/coffee, snacks, and refreshments provided throughout the day Insurance : TATA AIG Family Group Insurance for INR 5.0 Lakhs (Coverage: Self + Spouse + Up to 3 Children) Professional Development : Opportunities for continuous learning and growth Team Outings and Events : Regular team-building activities and events Employee Recognition : Programs to acknowledge and reward outstanding performance How to Apply : Interested candidates can apply through LinkedIn or email us at careers@charles-technologies.com. Join us at Charles Technologies and be a part of a team that is shaping the future of mobile applications! Show more Show less

Posted 4 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru District, Karnataka

On-site


Job Title: System Architect - Kubernetes & Cloud Infrastructure (12-14 Years Experience) Period : 6 Month Contractual Role Location: Bangalore/Gurgaon Job Description: This is a senior position. Candidate should plan and lead the modernization of legacy container based systems into scalable, secure, and production-grade Kubernetes (K8s) platforms across both cloud (AWS EKS, GKE, AKS) and on-prem (OpenShift, Tanzu) environments. This role requires deep expertise in Kubernetes architecture, orchestration and roll-out strategy, previous/existing coding/development, and telecom or large-scale production experience. The position also requires team leadership, strategy planning along with hands-on implementation of highly available and resilient systems in K8 environment. Key Responsibilities: 1. Analyze existing monolithic but containerized applications to plan their transformation into K8snative microservices architecture. 2. Create end-to-end Kubernetes orchestration architecture, addressing ingress, service mesh, PVCs, networking, observability, scaling, resiliency and secure communication. 3. Lead the migration of applications to K8s clusters, ensuring zero data loss, seamless user experience, and alignment with business SLAs. 4. Define and implement horizontal and vertical scaling strategies, multi-zone resilience, volume management, and optimized ingress/data flow. 5. Integrate supporting systems such as Kafka, ClickHouse, KeyCloak, MySQL/Oracle, Redis, and Nginx, ensuring security, HA, and performance tuning. 6. Architect and execute deployment strategies, including blue-green, canary, rollback, backup & restore, and K8s upgrade strategies with minimal downtime. 7. Debug complex production issues related to Kubernetes, networking (CNI), ingress, persistent volumes, and pod scheduling. 8. Mentor and lead a team of 5+ engineers, guiding them across the design, implementation, and production rollout phases. 9. Collaborate with engineering leadership to deliver project plans, timelines, architectural reviews, and technical risk assessments. 10. Serve as the technical authority and hands-on contributor for the full lifecycle of Kubernetes enablement. 11. Harden Kubernetes environments with Zero Trust, container vulnerability and DevSecOps best practices for secure, production-grade deployments. Required Skills & Experience: Kubernetes (K8s): EKS, OpenShift (OCP), Tanzu, Cluster API, Helm, Operators Container Orchestration: Docker, CRI-O, Containerd, K3s Infra as Code & Automation: Terraform, Ansible, Helm Charts, Shell, Python K8 Storage & Networking: PVCs, CSI drivers, LoadBalancers, Ingress Controllers, CNI (Calico, Flannel, Multus etc) Observability & Monitoring: Prometheus, Grafana, Loki, OpenTelemetry Messaging & Databases: Kafka, ClickHouse, MySQL/Oracle, Redis Security & Authentication: KeyCloak, OAuth2, OIDC, RBAC, GDPR Networking & API Gateways: Nginx, HAProxy Microservices Understanding: Java, GoLang, C-based services Job Type: Contractual / Temporary Contract length: 6 months Pay: From ₹80,619.83 per month Schedule: Day shift Ability to commute/relocate: Bengaluru District, Karnataka: Reliably commute or planning to relocate before starting work (Required) Work Location: In person
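For the deployment-strategy item above (blue-green, canary, rollback with minimal downtime), a minimal scripted safety net can be as simple as gating on `kubectl rollout status` and undoing on failure. The sketch below assumes kubectl is already configured for the target cluster and that the container name matches the deployment name — both are illustrative assumptions, not part of the role description.

```python
# Sketch of a scripted rollback gate around kubectl: set the new image, wait for the
# rollout, undo automatically if it never stabilises.
import subprocess
import sys

def sh(*args: str) -> int:
    print("+", " ".join(args))
    return subprocess.call(args)

def deploy_with_rollback(deployment: str, image: str, namespace: str = "prod") -> None:
    sh("kubectl", "-n", namespace, "set", "image",
       f"deployment/{deployment}", f"{deployment}={image}")
    ok = sh("kubectl", "-n", namespace, "rollout", "status",
            f"deployment/{deployment}", "--timeout=300s") == 0
    if not ok:
        sh("kubectl", "-n", namespace, "rollout", "undo", f"deployment/{deployment}")
        sys.exit("rollout failed; previous revision restored")

if __name__ == "__main__":
    deploy_with_rollback("api-gateway", "registry.example.com/api-gateway:1.4.2")
```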

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat

Remote


Job Title: Sr.Database Engineer Location: Ahmedabad, Gujarat Job Type: Full Time Experience: 3+ Years Department: Data Engineering About the Role: We’re looking for a skilled and motivated Database Administrator (DBA) to join our team and support both on-premises SQL Server databases and cloud-based environments in AWS (RDS, Redshift). You’ll be responsible for ensuring database performance, stability, and scalability This role is ideal for someone who thrives in both traditional database environments and cloud-native architectures, enjoys problem-solving, and takes ownership of incident response and continuous improvement. Responsibilities: Administer, configure, and maintain Microsoft SQL Server (on-premises) and AWS RDS/Redshift environments. Perform regular maintenance tasks such as backups, restores, patching, and capacity planning. Manage database security, user access, and roles across environments. Provision, configure, and manage AWS RDS (SQL Server, PostgreSQL) and Redshift instances. Implement backup strategies, monitoring, and disaster recovery solutions in the cloud. Automate routine database tasks and processes using AWS tools and scripting. Deploy and monitor AWS Glue and AWS Lambda. Troubleshoot ETL job failures, ensure data quality, and support timely delivery of data. Use tools like AWS CloudWatch, SolarWinds, and Redgate SQL Monitor for real-time performance tracking and alerting. Identify and resolve performance bottlenecks in SQL queries, indexes, and server configurations. Act as a point of contact for database-related incidents and outages. Perform root cause analysis, document findings, and work with engineering teams to implement long-term fixes. Maintain comprehensive and up-to-date documentation on database systems, configurations, and procedures. Collaborate with development and DevOps teams to support database and data platform needs. Contribute to automation and infrastructure improvements in cloud and hybrid environments. Maintain detailed documentation and knowledge base articles for internal teams. Qualifications: Experience as a database administrator, with a strong foundation in SQL Server administration, backup/restore strategies, and high availability solutions (e.g., Always On, clustering). Hands-on experience managing AWS RDS (SQL Server/PostgreSQL) and Amazon Redshift, including provisioning, scaling, backups, snapshots, and security configurations. Proficiency with monitoring tools like AWS CloudWatch, SolarWinds, and Redgate SQL Monitor, with the ability to configure alerts, identify trends, and proactively address performance bottlenecks. Expertise in performance tuning for: SQL Server: Execution plan analysis, indexing strategies, TempDB optimization, query tuning. RDS: Parameter group tuning, performance insights, instance sizing. Redshift: WLM configuration, vacuum/analyze, distribution/sort keys, and query optimization. Strong understanding of database security best practices, user access controls, encryption, and auditing. Experience managing incident response, including root cause analysis, mitigation planning, and follow-up documentation. Ability to create and maintain detailed runbooks, SOPs, and knowledge base articles for repeatable processes and troubleshooting procedures. Comfortable working in hybrid environments, with coordination across on-premises and cloud-based systems. Familiarity with automation and scripting using PowerShell, Python, or Bash to streamline database tasks and monitoring. 
Hands-on experience with CI/CD pipelines to support database changes and deployments using tools like AWS CodePipeline or GitLab CI. Experience integrating database deployments into DevOps pipelines, including version-controlled DDL/DML scripts, pre-deployment checks, and rollback strategies. Ability to perform manual deployments when required (e.g., via SSMS, pgAdmin, or SQL scripts) while adhering to change management processes. Ability to work independently, manage priorities, and take ownership of tasks in a distributed team environment. Strong communication and interpersonal skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders. A proactive and detail-oriented mindset, with a focus on continuous improvement and system reliability.

Why Join Us:
- Young team, thriving culture: flat-hierarchical, friendly, engineering-oriented, and growth-focused
- Well-balanced learning and growth opportunities
- Free health insurance
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks
- Sponsorship for certifications/events and library service
- Flexible work timing, leaves for life events, WFH, and hybrid options
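The requirement above for version-controlled DDL/DML scripts with pre-deployment checks and rollback strategies can be illustrated with a tiny migration runner. The sketch uses the stdlib sqlite3 module so it runs anywhere; against SQL Server or RDS the same pattern would go through pyodbc or psycopg2, and the script names and schema here are made up for the example.

```python
# Tiny migration runner illustrating "versioned DDL/DML scripts with rollback".
import sqlite3

MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)"),
    ("002_add_status",    "ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new'"),
]

conn = sqlite3.connect("app.db", isolation_level=None)   # autocommit; we BEGIN explicitly
conn.execute("CREATE TABLE IF NOT EXISTS schema_history (name TEXT PRIMARY KEY)")
applied = {row[0] for row in conn.execute("SELECT name FROM schema_history")}

for name, ddl in MIGRATIONS:
    if name in applied:
        continue
    try:
        conn.execute("BEGIN")
        conn.execute(ddl)
        conn.execute("INSERT INTO schema_history (name) VALUES (?)", (name,))
        conn.execute("COMMIT")
        print("applied", name)
    except sqlite3.Error as exc:
        conn.execute("ROLLBACK")          # a failed script leaves no partial changes
        print(f"{name} failed and was rolled back: {exc}")
        break
conn.close()
```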

Posted 4 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat

Remote


EXPERIENCE THAT MATTERS Our Employees Always Come First Get the Recognition You Deserve Best Opportunity to Learn & Grow Freedom & Flexibility to Perform Balance Your Professional & Personal Life Professional Yet Friendly Environment Stay Abreast with Current Technologies Think like an Entrepreneur Opportunity to Innovate & Succeed We Help Bring Out the Best in You LAUNCH A NEW CHAPTER IN YOUR CAREER Flaunt Your Talent Vrinsoft is more than your Workstation. It is an opportunity to showcase your talent too. Fueling Sportsmanship An opportunity to work in tandem with your teammates and build your team spirit Employee Collaboration A Professional Environment to collaborate with teammates! Brainstorming & exploring various perspectives Festival Celebrations Maintain the Festive Spirit as Well – We Celebrate all Festivals Ideas Worth Spreading The perfect platform to innovate, perform well and grow. Fit For Life We believe in a healthy lifestyle & support your zeal to stay fit. Helping Hands Expert help is always available to get you out of a fix. Read Learn Discover A good place to explore your potential and continue to learn and grow. WHAT WE LOOK FOR? 01. Positive Attitude Maintain a positive attitude and follow the office culture. Embrace the core values & maintain a good team attitude. 02. Leadership Traits We welcome those with leadership skills to contribute to the team and explore their potential. 03. Excellent Team Player Good team spirit and an ability to work closely with other team members. 04. Learn and Grow Eager to learn further & expand their knowledge. Passion to further their career. 05. High Goals People who set their goals high and make an effort to achieve them. 06. Self-Motivated We like self-motivated employees who have a passion for their work? RECRUITMENT PROCESS Step 01 Shortlist Candidates Accept candidate profiles Screen profiles Evaluate & verify eligibility Initial HR interview Step 02 Practical Test Aptitude test Technical test Skill test (as applicable) Step 03 Evaluation Skill Level Basic level Advanced level Expert level Step 04 HR Round Company Policies Company Culture & Perks Remuneration Answer your questions Server Admin 3 - 5 years Ahmedabad SERVER ADMIN Job Title: Server Administrator – On-Premise & Cloud Infrastructure Location: Ahmedabad, Gujarat, India (Work from Office) Experience: 3–5 Years Department: IT / Infrastructure & DevOps Employment Type: Full-time Keywords: Server Administration, Cloud Infrastructure (AWS / Azure / GCP), VPS Providers (GoDaddy / DigitalOcean / Linode / Vultr), Linux & Windows Servers, Server Optimization, Rollback & Deployment Automation, Monitoring Tools (Zabbix / Prometheus / Grafana), Backup, Rollback & Disaster Recovery, Security & Compliance (Firewall / IAM / SSL / VPN) About the Role We are seeking a skilled and proactive Server Administrator with 3–5 years of hands-on experience in managing on-premise and cloud-based infrastructure. This role is central to ensuring the performance, availability, security, and scalability of our internal systems, cost effective and optimum utilization or infrastructure, staging/production environments, and DevOps pipelines. You will collaborate with developers, DevOps, and IT teams to support deployments, enforce security standards, manage infrastructure hosted across physical servers, VPS providers, and major cloud platforms (AWS, Azure, GCP), and ensure proper rollback mechanisms during deployments. Key Responsibilities 1. 
Server Administration (Windows & Linux) Install, configure, and maintain physical and virtual servers. Manage user accounts, groups, roles, and access control policies. Monitor and optimize CPU, memory, disk, and network usage. Apply security patches, updates, and software upgrades. Troubleshoot and resolve server performance issues and outages. 2. Cloud Infrastructure Management (AWS, Azure, GCP, etc.) Deploy, configure, and maintain cloud services including: VMs, storage, networking, and managed databases Implement cost optimization strategies and usage policies. Configure IAM roles, security groups, and cloud permissions. Monitor cloud systems using native tools like AWS CloudWatch, Azure Monitor, etc. 3. Deployment & Rollback Management Collaborate with DevOps and engineering teams on infrastructure deployments. Create and maintain rollback plans and scripts to recover from failed deployments. Use version-controlled configuration management to allow quick recovery to stable states. Automate and test rollback processes to ensure minimal downtime and data integrity. Document rollback procedures and ensure teams are trained to execute them when needed. 4. Backup & Disaster Recovery Establish and maintain backup strategies for cloud and on-prem systems. Regularly verify backup integrity and execute recovery drills. Define and implement disaster recovery plans to ensure minimal downtime. 5. Security & Compliance Enforce infrastructure hardening and adhere to security best practices. Manage firewalls, encryption, antivirus, and multi-factor authentication. Audit systems for vulnerabilities and apply mitigations. Handle SSL certificate management, VPN setup, and secure remote access. 6. Monitoring & Reporting Deploy and manage monitoring tools such as: Zabbix, Prometheus, Nagios, Grafana Create and maintain system health dashboards, uptime reports, and incident logs. Respond to alerts proactively and resolve infrastructure issues efficiently. 7. Collaboration & Documentation Work closely with DevOps and development teams to support CI/CD, deployments, and testing environments. Document: System configurations Standard operating procedures (SOPs) Troubleshooting guides Change logs and incident resolutions 8. Continuous Improvement Research and recommend emerging technologies for infrastructure enhancement. Regularly review infrastructure for cost, performance, and reliability improvements. Stay current with cloud certifications, compliance standards, and industry practices. Required Skills & Qualifications 3–5 years of hands-on experience in server administration and infrastructure management. Strong knowledge of Windows and Linux systems administration. Proven experience with cloud platforms: AWS, Azure, GCP. Familiarity with VPS providers (DigitalOcean, Linode, Vultr, etc.). Proficient in configuring: Web servers (e.g., NGINX, Apache) Database servers (MySQL, PostgreSQL, MongoDB) DNS, SMTP, FTP, and caching servers Experience with source control hosting (GitLab, GitHub, Bitbucket) and integration with CI/CD tools. Understanding of networking concepts: TCP/IP, DNS, VPN, VLAN, routing, firewalls. Experience with monitoring and logging tools (e.g., Grafana, Prometheus, ELK stack). Knowledge of security compliance standards (e.g., ISO, SOC 2, GDPR, or internal IT audits). Good scripting skills in Bash, PowerShell, or Python for automation tasks. Preferred Skills Experience with infrastructure as code tools (Terraform, Ansible, CloudFormation). 
  • Basic knowledge of DevOps tools: Docker, Kubernetes, Helm.
  • Experience with load balancing and high-availability setups.
  • Familiarity with SSL lifecycle management and certificate automation (e.g., Let's Encrypt).

Soft Skills
  • Strong problem-solving and critical thinking skills.
  • Excellent communication and the ability to collaborate across teams.
  • Capable of managing priorities in a dynamic and fast-paced environment.
  • Detail-oriented with a focus on documentation and standardization.
  • Ability to handle conflict, resolve infrastructure bottlenecks, and support incident response.

Why Join Us?
  • Be a foundational part of a well-established engineering and AI-focused organization.
  • Gain exposure to hybrid infrastructure models across cloud and on-premise environments.
  • Work with modern tools and practices to support real-world AI, web, and enterprise applications.
  • Contribute to a culture of excellence, security, and reliability.

Apply now at hr@vrinsofts.com or call us on +91 7574 926643.
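The rollback responsibilities above come down to being able to return quickly to a known-good state. As one illustration only, here is a minimal Python sketch of a symlink-based release rollback; the /srv/app paths, the release layout, and the app.service systemd unit are hypothetical placeholders, and an actual rollback plan would be tailored to the deployment tooling in use.

```python
#!/usr/bin/env python3
"""Minimal release-rollback sketch: re-point a 'current' symlink to the previous release.

Assumes a conventional layout like /srv/app/releases/<timestamp>/ with
/srv/app/current symlinked to the active release. All paths and names are hypothetical.
"""
import os
import subprocess
import sys

RELEASES_DIR = "/srv/app/releases"   # hypothetical release directory
CURRENT_LINK = "/srv/app/current"    # symlink served by the web server
SERVICE_NAME = "app.service"         # hypothetical systemd unit to restart


def rollback() -> None:
    # Assumes release directories are timestamp-named, so lexical sort is chronological.
    releases = sorted(os.listdir(RELEASES_DIR))
    if len(releases) < 2:
        sys.exit("Nothing to roll back to: fewer than two releases found.")

    active = os.path.basename(os.path.realpath(CURRENT_LINK))
    # Pick the newest release that is not the currently active one.
    candidates = [r for r in releases if r != active]
    previous = candidates[-1]

    # Swap the symlink atomically: create a temp link, then rename it over the old one.
    tmp_link = CURRENT_LINK + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(os.path.join(RELEASES_DIR, previous), tmp_link)
    os.replace(tmp_link, CURRENT_LINK)

    # Restart the service so it picks up the previous release.
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)
    print(f"Rolled back from {active} to {previous}")


if __name__ == "__main__":
    rollback()
```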

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Should have 4–6 years of experience in application development and support.

DevOps tools
  • Knowledge of Jenkins, GitHub, Maven, Nexus, Kubernetes, Docker, Terraform, Ansible
  • Strong coding experience in Python

SQL
  • Knowledge of the syntax of SQL statements: INSERT, UPDATE, DELETE
  • Knowledge of the syntax of SQL statements that read data from one or more database tables (inner/outer joins)
  • Queries using filters (WHERE clause)
  • Queries with sorting/grouping (GROUP BY, ORDER BY, etc.)
  • Knowledge of subselects
  • Meaning of the terms transaction, commit, and rollback (see the sketch below)

GCP Cloud
  • GCP knowledge is a must for building CI/CD pipelines

Principal responsibilities
  • Automate and build CI/CD pipelines using a CI/CD tool
  • Automate infrastructure setup using DevOps tools
  • Good communication skills required
  • Analyse business requirements
  • Coding and unit testing
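For the SQL topics above, especially transaction, commit, and rollback, a minimal sketch using Python's built-in sqlite3 module may help; the accounts table and amounts are invented for illustration, and the same semantics apply in MySQL, PostgreSQL, and other SQL databases.

```python
"""Minimal illustration of transaction, commit, and rollback using Python's built-in sqlite3."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 50)")
conn.commit()

try:
    # A transfer is one transaction: both updates must succeed or neither applies.
    conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
    # Simulate a business-rule failure detected mid-transaction.
    (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    if balance < 0:
        raise ValueError("insufficient funds")
    conn.commit()    # make both updates permanent
except Exception:
    conn.rollback()  # undo every statement since the last commit
    print("Transaction rolled back; balances unchanged.")

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
conn.close()
```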

Posted 4 weeks ago

Apply

3 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


About the Client
My client is a market-leading company with over 30 years of experience in the industry. One of the world's leading professional services firms, with $19.7B in revenue and 333,640 associates worldwide, it helps clients modernize technology, reimagine processes, and transform experiences so they remain competitive in our fast-paced world. Its specialties include Intelligent Process Automation, Digital Engineering, Industry & Platform Solutions, Internet of Things, Artificial Intelligence, Cloud, Data, Healthcare, Banking, Finance, Fintech, Manufacturing, Retail, Technology, and Salesforce.

Job Summary
We are seeking a highly skilled AWS DevSecOps Engineer with over 8 years of experience for the Smart Products & IoT Innovation Center.

Position: AWS DevSecOps Engineer
Mandatory Skills: Experience in a Smart Products & IoT Innovation Center
Experience Required: 8 to 12 years
Notice: Immediate to 15 days
Work Location: Noida (GEC)
Mode of Work: Hybrid
Type of Hiring: Contract
Project Tenure: Long-term project

Position Title: AWS DevSecOps Engineer – Smart Products & IoT Innovation Center

Position Summary
Our client is currently seeking a DevSecOps Engineer for IoT projects in the Smart Products & IoT Strategic Innovation Centre India team. This role is responsible for end-to-end provisioning, management, and support of infrastructure deployed in the cloud for IoT projects.

Duties & Responsibilities
  • Manage AWS services such as IoT Core, EC2, VPC, Lambda, Kinesis, SQS, DynamoDB, Elasticsearch, S3, API Gateway, Cognito, CloudWatch, SES/SNS/SQS, IAM, Route 53, CloudFront, RDS, Athena, KMS, Systems Manager, AWS monitoring services, AWS Organizations, WAF, Control Tower, Audit Manager, DevOps Guru, AWS Proton, and Artifact.
  • Use AWS DevOps tools: ECR, ECS, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, CodeStar, Cloud9, etc.
  • Perform RCA, disaster recovery, service outage management, and backup planning.
  • Handle production workloads spread across the globe.
  • Handle GDPR policy requirements in CI/CD.
  • Drive proofs of concept (POCs) in AWS services.
  • Take technical responsibility for moving implementations from POC to large-scale rollout.
  • Mentor team members hands-on.
  • Present technical topics, IoT trends, etc. to the team.

Qualifications and Experience
  • Bachelor's degree in Software Engineering, Computer Science, Computer Engineering, or a related engineering discipline; a Master's degree or higher from IIT/IISc or other premier institutes is preferred.
  • 5 years of experience in technical architecture, including 3+ years of experience in AWS.
  • In-depth knowledge and experience of the AWS IoT platform and services.
  • Hands-on experience in building and deployment for Node.js, React.js, React Native, Go, TypeScript, and Python codebases.
  • Good experience with AWS Security, Identity, & Compliance services.
  • Good experience with AWS Management & Governance services.
  • Good experience with a deployment framework (GitHub, GitLab, Jenkins).
  • Good experience configuring and deploying Android and iOS application CI/CD platforms such as Bitrise.
  • AWS Professional certification will carry weightage.
  • Exposure to Kibana and experience with Red Hat.
  • Knowledge of code promotion workflows where promotion/rollback of code is integrated with a tool such as Jira.
  • Experience handling stack auto scaling for raised incidents.
  • In-depth knowledge of Python and CloudFormation.
  • Good experience with AWS DevOps tools and services.
  • Must have experience in the creation and assignment of IAM roles and policies (see the sketch after this listing).
  • Must have experience in IaC (AWS CLI and the AWS Boto library).
  • Strong understanding of techniques such as Continuous Integration, Continuous Delivery, Test-Driven Development, cloud development, resiliency, and security.
  • AWS cost optimization; AWS monitoring and scaling.
  • Excellent knowledge of Git workflows with a staging environment using AWS DevOps tools.
  • Experience in containerized deployments and container orchestration.
  • Experience in provisioning environments, infrastructure management, and monitoring.
  • Experience designing HA architecture and DC-DR setups.
  • Experience in agile development, stage-gate processes, minimum viable product development, and DevOps tools.

Skills and Abilities Required
  • Can-do, positive attitude; always looking to accelerate development.
  • Driven; committed to high standards of performance and demonstrating personal ownership for getting the job done.
  • Innovative and entrepreneurial attitude; stays up to speed on the latest technologies and industry trends; healthy curiosity to evaluate, understand, and utilize new technologies.
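For the IAM and Boto requirements above, here is a minimal, hedged boto3 sketch of creating a role and attaching a managed policy; the role name, trust principal, and description are placeholders, and real roles would follow the account's naming and least-privilege conventions.

```python
"""Hedged sketch: create an IAM role and attach a managed policy with boto3."""
import json
import boto3

iam = boto3.client("iam")

# Example trust policy letting AWS Lambda assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

role = iam.create_role(
    RoleName="iot-demo-lambda-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Demo role for an IoT Lambda function",
)

# Attach an AWS-managed policy for basic Lambda logging to CloudWatch.
iam.attach_role_policy(
    RoleName="iot-demo-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

print("Created role:", role["Role"]["Arn"])
```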

Posted 4 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Role Description
Location: All UST locations
Experience Range: 5–8 years

Responsibilities

Infrastructure as Code & Cloud Automation
  • Design and implement Infrastructure as Code (IaC) using Terraform, Ansible, or equivalent for both Azure and on-prem environments.
  • Automate provisioning and configuration management for Azure PaaS services (App Services, AKS, Storage, Key Vault, etc.).
  • Manage hybrid cloud deployments, ensuring seamless integration between Azure and on-prem alternatives.

CI/CD Pipeline Development (Without Azure DevOps)
  • Develop and maintain CI/CD pipelines using GitHub Actions or Jenkins.
  • Automate containerized application deployment using Docker and Kubernetes (AKS).
  • Implement canary deployments, blue-green deployments, and rollback strategies for production releases (see the sketch after this listing).

Cloud Security & Secrets Management
  • Implement role-based access control (RBAC) and IAM policies across cloud and on-prem environments.
  • Secure API and infrastructure secrets using HashiCorp Vault (instead of Azure Key Vault).

Monitoring, Logging & Observability
  • Set up observability frameworks using Prometheus, Grafana, and the ELK Stack (Elasticsearch, Kibana, Logstash).
  • Implement centralized logging and monitoring across cloud and on-prem environments.

Must-Have Skills & Experience

Cloud & DevOps
  • Azure PaaS services: App Services, AKS, Azure Functions, Blob Storage, Redis Cache
  • Kubernetes & containerization: hands-on experience with AKS, Kubernetes, Docker
  • CI/CD tools: experience with GitHub Actions, Jenkins
  • Infrastructure as Code (IaC): proficiency in Terraform

Security & Compliance
  • IAM & RBAC: experience with Active Directory, Keycloak, LDAP
  • Secrets management: expertise in HashiCorp Vault or Azure Key Vault
  • Cloud security best practices: API security, network security, encryption

Networking & Hybrid Cloud
  • Azure networking: knowledge of VNets, Private Endpoints, Load Balancers, API Gateway, Nginx
  • Hybrid cloud connectivity: experience with VPN Gateway, private peering

Monitoring & Performance Optimization
  • Observability tools: Prometheus, Grafana, ELK Stack, Azure Monitor & App Insights
  • Logging & monitoring: experience with Elasticsearch, Logstash, OpenTelemetry, Log Analytics

Good-to-Have Skills & Experience
  • Experience with additional IaC tools (Ansible, Chef, Puppet)
  • Experience with additional container orchestration platforms (OpenShift, Docker Swarm)
  • Knowledge of advanced Azure services (e.g., Azure Logic Apps, Azure Event Grid)
  • Familiarity with cloud-native monitoring solutions (e.g., CloudWatch, Datadog)
  • Experience implementing and managing multi-cloud environments

Key Personal Attributes
  • Strong problem-solving abilities
  • Ability to work in a fast-paced and dynamic environment
  • Excellent communication skills and ability to collaborate with cross-functional teams
  • Proactive and self-motivated, with a strong sense of ownership and accountability

Skills: Azure, Scripting, CI/CD
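For the rollback strategies mentioned above, a minimal sketch of a pipeline rollback step is shown below; it simply wraps the standard kubectl rollout commands, and the deployment and namespace names are placeholders. A real GitHub Actions or Jenkins stage would add credentials, notifications, and gating around the same idea.

```python
"""Hedged sketch of a deployment rollback step for a Kubernetes (AKS) release."""
import subprocess
import sys

DEPLOYMENT = "web-frontend"  # placeholder deployment name
NAMESPACE = "production"     # placeholder namespace


def run(cmd: list[str]) -> int:
    """Run a command, echoing it first, and return its exit code."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode


# Wait (up to 2 minutes) for the new rollout; if it fails, undo to the previous revision.
status = run([
    "kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}",
    "-n", NAMESPACE, "--timeout=120s",
])

if status != 0:
    print("Rollout failed or timed out; rolling back to the previous revision.")
    run(["kubectl", "rollout", "undo", f"deployment/{DEPLOYMENT}", "-n", NAMESPACE])
    run(["kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}", "-n", NAMESPACE])
    sys.exit(1)

print("Rollout succeeded.")
```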

Posted 4 weeks ago

Apply

5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Description
We’re Hiring: MLOps Engineer (Azure)
🔹 Location: Ahmedabad, Gujarat
🔹 Experience: 3–5 Years
* Immediate joiners will be preferred.

Job Summary:
We are seeking a skilled and proactive MLOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities:

MLOps:
● Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
● Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
● Manage model versioning, performance tracking, and rollback strategies (see the sketch after this listing).
● Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.

DataOps:
● Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
● Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
● Monitor and optimize data workflows for performance and cost efficiency.
● Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.

Required Skills:
● Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
● Proficiency in Python, Bash, and scripting for automation.
● Experience with Docker, Kubernetes, and containerized deployments in Azure.
● Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
● Familiarity with monitoring, logging, and alerting in cloud environments.
● Knowledge of data modeling, data warehousing, and SQL.

Preferred Qualifications:
● Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
● Experience with Databricks, Delta Lake, or Apache Spark on Azure.
● Exposure to security best practices in ML and data environments (e.g., identity management, network security).

Soft Skills:
● Strong problem-solving and communication skills.
● Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
● Passion for automation, optimization, and driving operational excellence.
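For the model versioning and rollback responsibilities above, a minimal sketch using the MLflow client is shown below; the model name and version numbers are placeholders, and newer MLflow releases favor model aliases over stage transitions, so treat this as illustrative only.

```python
"""Hedged sketch: "roll back" a registered model by promoting the previous version.

Assumes an MLflow tracking server with a model registry (MLFLOW_TRACKING_URI set
in the environment); the model name and version numbers are placeholders.
"""
from mlflow.tracking import MlflowClient

MODEL_NAME = "churn-classifier"  # placeholder registered model name
BAD_VERSION = "7"                # version just deployed and misbehaving
GOOD_VERSION = "6"               # last known-good version

client = MlflowClient()

# Demote the problematic version and promote the previous one back to Production.
client.transition_model_version_stage(name=MODEL_NAME, version=BAD_VERSION, stage="Archived")
client.transition_model_version_stage(name=MODEL_NAME, version=GOOD_VERSION, stage="Production")

for mv in client.get_latest_versions(MODEL_NAME, stages=["Production"]):
    print(f"Production is now serving version {mv.version}")
```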

Posted 4 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Job Title: Server Administrator – On-Premise & Cloud Infrastructure
Location: Ahmedabad, Gujarat, India (Work from Office)
Experience: 4–6 Years
Department: IT / Infrastructure & DevOps
Employment Type: Full-time

Keywords
Server Administration, Cloud Infrastructure (AWS / Azure / GCP), VPS Providers (GoDaddy / DigitalOcean / Linode / Vultr), Linux & Windows Servers, Server Optimization, Rollback & Deployment Automation, Monitoring Tools (Zabbix / Prometheus / Grafana), Backup, Rollback & Disaster Recovery, Security & Compliance (Firewall / IAM / SSL / VPN)

About the Role
We are seeking a skilled and proactive Server Administrator with 3–5 years of hands-on experience managing on-premise and cloud-based infrastructure. This role is central to ensuring the performance, availability, security, and scalability of our internal systems, the cost-effective and optimal utilization of infrastructure, staging/production environments, and DevOps pipelines. You will collaborate with developers, DevOps, and IT teams to support deployments, enforce security standards, manage infrastructure hosted across physical servers, VPS providers, and major cloud platforms (AWS, Azure, GCP), and ensure proper rollback mechanisms during deployments.

Key Responsibilities

Server Administration (Windows & Linux):
  • Install, configure, and maintain physical and virtual servers.
  • Manage user accounts, groups, roles, and access control policies.
  • Monitor and optimize CPU, memory, disk, and network usage.
  • Apply security patches, updates, and software upgrades.
  • Troubleshoot and resolve server performance issues and outages.

Cloud Infrastructure Management (AWS, Azure, GCP, etc.):
  • Deploy, configure, and maintain cloud services, including VMs, storage, networking, and managed databases.
  • Implement cost optimization strategies and usage policies.
  • Configure IAM roles, security groups, and cloud permissions.
  • Monitor cloud systems using native tools such as AWS CloudWatch and Azure Monitor.

Deployment & Rollback Management:
  • Collaborate with DevOps and engineering teams on infrastructure deployments.
  • Create and maintain rollback plans and scripts to recover from failed deployments.
  • Use version-controlled configuration management to allow quick recovery to stable states.
  • Automate and test rollback processes to ensure minimal downtime and data integrity.
  • Document rollback procedures and ensure teams are trained to execute them when needed.

Backup & Disaster Recovery:
  • Establish and maintain backup strategies for cloud and on-prem systems.
  • Regularly verify backup integrity and execute recovery drills (see the sketch after this listing).
  • Define and implement disaster recovery plans to ensure minimal downtime.

Security & Compliance:
  • Enforce infrastructure hardening and adhere to security best practices.
  • Manage firewalls, encryption, antivirus, and multi-factor authentication.
  • Audit systems for vulnerabilities and apply mitigations.
  • Handle SSL certificate management, VPN setup, and secure remote access.

Monitoring & Reporting:
  • Deploy and manage monitoring tools such as Zabbix, Prometheus, Nagios, and Grafana.
  • Create and maintain system health dashboards, uptime reports, and incident logs.
  • Respond to alerts proactively and resolve infrastructure issues efficiently.

Collaboration & Documentation:
  • Work closely with DevOps and development teams to support CI/CD, deployments, and testing environments.
  • Document system configurations, standard operating procedures (SOPs), troubleshooting guides, change logs, and incident resolutions.

Continuous Improvement:
  • Research and recommend emerging technologies for infrastructure enhancement.
  • Regularly review infrastructure for cost, performance, and reliability improvements.
  • Stay current with cloud certifications, compliance standards, and industry practices.

Required Skills & Qualifications
  • 3 to 5 years of hands-on experience in server administration and infrastructure management.
  • Strong knowledge of Windows and Linux systems administration.
  • Proven experience with cloud platforms: AWS, Azure, GCP.
  • Familiarity with VPS providers (DigitalOcean, Linode, Vultr, etc.).
  • Proficient in configuring web servers (e.g., NGINX, Apache), database servers (MySQL, PostgreSQL, MongoDB), and DNS, SMTP, FTP, and caching servers.
  • Experience with source control hosting (GitLab, GitHub, Bitbucket) and integration with CI/CD tools.
  • Understanding of networking concepts: TCP/IP, DNS, VPN, VLAN, routing, firewalls.
  • Experience with monitoring and logging tools (e.g., Grafana, Prometheus, ELK stack).
  • Knowledge of security compliance standards (e.g., ISO, SOC 2, GDPR, or internal IT audits).
  • Good scripting skills in Bash, PowerShell, or Python for automation tasks.

Preferred Skills
  • Experience with infrastructure-as-code tools (Terraform, Ansible, CloudFormation).
  • Basic knowledge of DevOps tools: Docker, Kubernetes, Helm.
  • Experience with load balancing and high-availability setups.
  • Familiarity with SSL lifecycle management and certificate automation (e.g., Let's Encrypt).

Soft Skills
  • Strong problem-solving and critical thinking skills.
  • Excellent communication and the ability to collaborate across teams.
  • Capable of managing priorities in a dynamic and fast-paced environment.
  • Detail-oriented with a focus on documentation and standardization.
  • Ability to handle conflict, resolve infrastructure bottlenecks, and support incident response.

Why Join Us?
  • Be a foundational part of a well-established engineering and AI-focused organization.
  • Gain exposure to hybrid infrastructure models across cloud and on-premise environments.
  • Work with modern tools and practices to support real-world AI, web, and enterprise applications.
  • Contribute to a culture of excellence, security, and reliability.

(ref:hirist.tech)
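For the backup-integrity drills mentioned above, a small illustrative sketch is shown below; the backup and checksum paths are placeholders, and a production job would walk the whole backup catalogue and raise alerts (e.g., via Slack or email) on mismatches.

```python
"""Hedged sketch of a backup-integrity check: compare a backup archive's SHA-256
against a recorded checksum before trusting it for a restore or recovery drill.
"""
import hashlib
import pathlib
import sys

BACKUP_FILE = pathlib.Path("/backups/db/db-2024-01-15.sql.gz")            # placeholder path
CHECKSUM_FILE = pathlib.Path("/backups/db/db-2024-01-15.sql.gz.sha256")   # placeholder path


def sha256_of(path: pathlib.Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large backups do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


expected = CHECKSUM_FILE.read_text().split()[0]  # assumes "<hash>  <filename>" format
actual = sha256_of(BACKUP_FILE)

if actual != expected:
    sys.exit(f"Backup integrity check FAILED for {BACKUP_FILE.name}")
print(f"Backup integrity check passed for {BACKUP_FILE.name}")
```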

Posted 1 month ago

Apply

5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


We’re Hiring: MLOps Engineer (Azure)
🔹 Location: Ahmedabad, Gujarat
🔹 Experience: 3–5 Years
* Immediate joiners will be preferred.

Job Summary:
We are seeking a skilled and proactive MLOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities:

MLOps:
● Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
● Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
● Manage model versioning, performance tracking, and rollback strategies.
● Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.

DataOps:
● Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
● Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
● Monitor and optimize data workflows for performance and cost efficiency.
● Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.

Required Skills:
● Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
● Proficiency in Python, Bash, and scripting for automation.
● Experience with Docker, Kubernetes, and containerized deployments in Azure.
● Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
● Familiarity with monitoring, logging, and alerting in cloud environments.
● Knowledge of data modeling, data warehousing, and SQL.

Preferred Qualifications:
● Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
● Experience with Databricks, Delta Lake, or Apache Spark on Azure.
● Exposure to security best practices in ML and data environments (e.g., identity management, network security).

Soft Skills:
● Strong problem-solving and communication skills.
● Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
● Passion for automation, optimization, and driving operational excellence.

Posted 1 month ago

Apply

Exploring Rollback Jobs in India

Rollback-focused roles are in demand in India as companies rely on engineers who can revert failed deployments, releases, and database changes quickly and safely. These specialists play a crucial role in maintaining the stability and integrity of software systems.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

Average Salary Range

The average salary range for rollback professionals in India varies from ₹4-6 lakhs per year for entry-level positions to ₹12-18 lakhs per year for experienced professionals.

Career Path

In the field of rollback, a typical career path may include roles such as Junior Rollback Specialist, Rollback Analyst, Senior Rollback Engineer, and Rollback Manager.

Related Skills

In addition to expertise in rollback processes, professionals in this field often benefit from having skills in version control systems, software development lifecycle, problem-solving, and communication.

Interview Questions

  • What is a rollback in the context of software development? (basic)
  • Can you explain the difference between a rollback and a commit? (medium)
  • How do you ensure data consistency during a rollback operation? (medium)
  • What are some common challenges faced during rollback procedures? (medium)
  • Can you discuss a time when you had to perform a rollback in a production environment? (advanced)
  • How do you handle rollback failures? (advanced)
  • What are some best practices for implementing rollback procedures? (medium)
  • How do you decide when to initiate a rollback? (medium)
  • Explain the role of a rollback specialist in a DevOps environment. (medium)
  • How do you ensure the rollback process does not cause data loss? (advanced)
  • Describe a situation where you had to coordinate a rollback with multiple teams. (advanced)
  • What tools or software do you use for rollback operations? (basic)
  • How do you communicate a rollback plan to stakeholders? (medium)
  • Can you discuss the impact of a failed rollback on a production system? (advanced)
  • How do you prioritize rollback requests in a high-pressure environment? (medium)
  • Explain the difference between a full rollback and a partial rollback. (basic) (a sketch follows this list)
  • How do you test rollback procedures before implementing them in a live environment? (medium)
  • Can you walk us through a recent successful rollback operation you performed? (advanced)
  • What steps do you take to ensure rollback procedures are documented properly? (medium)
  • How do you handle conflicts that arise during a rollback process? (medium)
  • Describe a situation where you had to rollback changes due to a security breach. (advanced)
  • How do you handle dependencies between rollback operations? (medium)
  • What measures do you take to prevent the need for rollbacks in the first place? (medium)
  • How do you ensure data integrity after a rollback operation? (advanced)
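For the full versus partial rollback question flagged above, a minimal sketch using Python's built-in sqlite3 contrasts a transaction-wide ROLLBACK with rolling back to a SAVEPOINT; the orders table and values are invented for illustration.

```python
"""Full vs. partial rollback: a ROLLBACK undoes the whole transaction, while
ROLLBACK TO SAVEPOINT undoes only the statements issued after that savepoint."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode: we manage transactions explicitly
cur = conn.cursor()

cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO orders (id, status) VALUES (1, 'confirmed')")

cur.execute("SAVEPOINT before_risky_step")
cur.execute("INSERT INTO orders (id, status) VALUES (2, 'pending')")

# Partial rollback: only the work after the savepoint is undone; order 1 survives.
cur.execute("ROLLBACK TO SAVEPOINT before_risky_step")
cur.execute("RELEASE SAVEPOINT before_risky_step")

cur.execute("COMMIT")

print(cur.execute("SELECT id, status FROM orders").fetchall())  # [(1, 'confirmed')]
conn.close()
```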

Conclusion

As you prepare for interviews for rollback roles in India, make sure to brush up on your technical knowledge, practice problem-solving scenarios, and showcase your communication skills. With the right preparation and confidence, you can land a rewarding career in the exciting field of rollback operations. Good luck!

