5.0 years
0 Lacs
Karnataka, India
On-site
Who You'll Work With
You will be part of the Digital Design & Merchandising, Product Creation, Planning, and Manufacturing Technology team at Converse. You will take direction from and work primarily with the Demand and Supply team, supporting the business planning space. You'll work with a talented team of engineers, data architects, and business stakeholders to design and implement scalable data integration solutions on cloud-based platforms that support our planning organization. The successful candidate will be responsible for leading the integration of planning systems, processes, and data across the organization.

Who We Are Looking For
We're looking for a seasoned Cloud Integration Lead with expertise in Databricks, Apache Spark, and cloud-based data integration. You'll have a strong technical background, excellent collaboration skills, and a passion for delivering high-quality solutions.

The Ideal Candidate Will Have
5+ years of experience with Databricks, Apache Spark, and cloud-based data integration.
Strong technical expertise with cloud platforms, including AWS and/or Azure.
Strong programming skills in languages such as SQL, Python, Java, or Scala.
3+ years of experience with cloud-based data infrastructure and integration, leveraging tools such as S3, Airflow, EC2, AWS Glue, DynamoDB, Lambda, Athena, AWS CodeDeploy, Azure Data Factory, or Google Cloud Dataflow.
Experience with Jenkins and other CI/CD tools like GitLab CI/CD, CircleCI, etc.
Experience with containerization using Docker and Kubernetes.
Experience with infrastructure as code using tools like Terraform or CloudFormation.
Experience with Agile development methodologies and version control systems like Git.
Experience with IT service management tools like ServiceNow, JIRA, etc.
Data warehousing solutions, such as Amazon Redshift, Azure Synapse Analytics, or Google BigQuery, are a plus but not mandatory.
Data science and machine learning concepts, including TensorFlow, PyTorch, or scikit-learn, are a plus but not mandatory.
Strong technical background in computer science, software engineering, or a related field.
Excellent collaboration, communication, and interpersonal skills.
Experience with data governance, data quality, and data security principles.
Ability to lead and mentor junior team members.
AWS Certified Solutions Architect, AWS Certified Developer Associate, or Azure Solutions Architect certification.

What You'll Work On
Design and implement scalable data integration solutions using Databricks, Apache Spark, and cloud-based platforms.
Develop and implement cloud-based data pipelines using Databricks, NiFi, AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
Collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements.
Develop and maintain technical standards, best practices, and documentation.
Integrate various data sources, including on-premises and cloud-based systems, applications, and databases.
Ensure data quality, integrity, and security throughout the integration process.
Collaborate with data engineering, data science, and business stakeholders to understand requirements and deliver solutions.
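As an illustration of the kind of Spark-based integration work this role describes, here is a minimal PySpark sketch that reads raw CSV files from an object store, applies a basic cleanup, and writes partitioned Parquet. The bucket names, paths, and column names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: batch-ingest raw CSV planning data and write curated Parquet.
# All paths and column names below are hypothetical placeholders.
spark = SparkSession.builder.appName("planning-data-ingest").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-raw-bucket/planning/orders/")  # hypothetical source path
)

curated = (
    raw.dropDuplicates(["order_id"])                       # hypothetical key column
       .withColumn("order_date", F.to_date("order_date"))  # normalize date strings
       .filter(F.col("quantity") > 0)                      # drop obviously bad rows
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated-bucket/planning/orders/")  # hypothetical target
)
```

The same pattern carries over to Databricks or Glue with only the session setup and storage paths changing.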
Posted 2 weeks ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Design data pipeline solutions based on the requirements, incorporating optimization techniques appropriate to the sources involved and the data volume.
Understanding of storage architectures such as Data Warehouses, Data Lakes, and Lakehouses.
Decide the tech stack and development standards, propose technical solutions and architectural patterns, and recommend best practices for the big data solution.
Provide thought leadership and mentoring to the data engineering team on how data should be stored and processed more efficiently and quickly at scale.
Ensure adherence to Security and Compliance policies for the products.
Stay up to date with evolving cloud technologies and development best practices, including open-source software.
Work in an Agile environment, provide optimized solutions to customers, and use JIRA for project management.
Proven problem-solving skills with the ability to anticipate roadblocks, diagnose problems, and generate effective solutions.
Analyze market segments and the customer base to develop market solutions.
Experience working with batch processing and real-time systems.
Enhance and support solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, Lambda, AWS Glue, and other data engineering technologies.
Proficiency in SQL writing, SQL concepts, data modelling techniques, data validation, data quality checks, and data engineering concepts.
Proficiency in design, creation, deployment, and review of existing and new products, obtaining final sign-off from the client by following SDLC best practices.
Experience with technologies like Databricks, HDFS, Redshift, Hadoop, S3, Athena, RDS, and Elastic MapReduce on AWS, or similar services in GCP/Azure.
Scheduling and monitoring of Spark jobs using tools like Airflow and Oozie.
Familiar with version control tools like Git, CodeCommit, Jenkins, and CodePipeline.
Work in a cross-functional team along with other Data Engineers, QA Engineers, and DevOps Engineers.
Develop, test, and implement data solutions based on finalized design documents.
Familiar with Unix/Linux and shell scripting.

Qualifications
Experience: 4-7 years of experience.
Excellent communication and problem-solving skills.
Highly proficient in project management principles, methods, techniques, and tools.
Minimum 2 to 4 years of working experience in PySpark, SQL, and AWS development.
Experience working as a mentor for junior team members.
Hands-on experience in ETL processes and performance optimization techniques is a must.
Candidate should have taken part in architecture design and discussion.
Minimum of 4 years of experience working with batch processing and real-time systems, using technologies like Databricks, HDFS, Redshift, Hadoop, Elastic MapReduce on AWS, Apache Spark, Hive/Impala, and NoSQL databases, or similar services in Azure or GCP.
Minimum of 4 years of experience working on data warehouse or data lake projects in a role beyond just data consumption.
Minimum of 4 years of extensive working knowledge of AWS, building scalable solutions. An equivalent level of experience in Azure or Google Cloud is also acceptable.
Minimum of 3 years of experience in programming languages (preferably Python).
Experience in the Pharma domain will be a very big plus.
Familiar with tools like Git, CodeCommit, Jenkins, and CodePipeline.
Familiar with Unix/Linux and shell scripting.

Additional Skills
Exposure to Pharma and life sciences would be an added advantage.
Certified in any cloud technology such as AWS, GCP, or Azure.
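Since the posting above calls out scheduling and monitoring of Spark jobs with tools like Airflow, here is a minimal, hedged Airflow DAG that submits a nightly PySpark job via spark-submit. The DAG id, schedule, and script path are illustrative assumptions, not details from the role.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal sketch of a nightly Spark job scheduled with Airflow.
# DAG id, schedule, and script path are hypothetical placeholders.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="nightly_sales_aggregation",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",               # run daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:

    run_spark_job = BashOperator(
        task_id="spark_submit_aggregation",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-artifacts/jobs/aggregate_sales.py "  # hypothetical script
            "--run-date {{ ds }}"                              # Airflow execution date
        ),
    )
```

Retries and the templated execution date give the job basic resilience and idempotent reruns without extra code.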
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary:
We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities:
Collect, clean, and analyze large datasets from various sources.
Perform exploratory data analysis (EDA) and generate actionable insights.
Build interactive dashboards and reports using Excel, Power BI, or Tableau.
Write and optimize SQL queries for data extraction and manipulation.
Collaborate with cross-functional teams to understand data needs.
Document analytical methodologies, insights, and recommendations.

Qualifications:
Bachelor's degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field.
Proficiency in Excel and SQL.
Working knowledge of Python (Pandas, NumPy, Matplotlib) or R.
Understanding of basic statistics and analytical methods.
Strong attention to detail and problem-solving ability.
Ability to work independently and communicate effectively in a remote setting.

Preferred Skills (Nice to Have):
Experience with BI tools like Power BI, Tableau, or Google Data Studio.
Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift).
Knowledge of data storytelling and KPI measurement.
Previous academic or personal projects in analytics.

What We Offer:
Monthly stipend of ₹25,000.
Fully remote internship.
Mentorship from experienced data analysts and domain experts.
Hands-on experience with real business data and live projects.
Certificate of Completion.
Opportunity for a full-time role based on performance.
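As a small illustration of the exploratory data analysis mentioned above, here is a hedged Pandas sketch that loads a CSV, profiles missing values, and summarizes a numeric column. The file name and columns are hypothetical placeholders.

```python
import pandas as pd

# Minimal EDA sketch; "sales.csv" and its columns are hypothetical placeholders.
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Basic shape and schema overview.
print(df.shape)
print(df.dtypes)

# Missing-value profile, sorted by the most incomplete columns.
missing = df.isna().mean().sort_values(ascending=False)
print(missing.head(10))

# Summary statistics for a numeric metric and a simple monthly roll-up.
print(df["revenue"].describe())
monthly = df.groupby(df["order_date"].dt.to_period("M"))["revenue"].sum()
print(monthly.tail(6))
```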
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
What You'll Be Doing
The Senior Data Engineer will help build the next generation of cloud-based data tools and reporting for Experian's MCE contact center division. Valuable, accurate, and timely information is core to our success, and this highly impactful role will be an essential part of that. Delivery pace and meeting our commitments are a primary focus to ensure that we are providing information at the speed of business. As part of this, understanding the business-side logic, environment, and workflows is important; in effect, we need someone who is an incredible problem solver. If you are a self-driven, determined engineer who loves data, creating cutting-edge tools, and moving fast, this position is for you! We are a results-oriented team looking to attract and reward high-performing individuals. Come join us!

Responsibilities Include
Complex Dataset Construction: Construct datasets using complex, custom stored procedures, views, and queries. Strong SQL development skills are a must, preferably within Redshift and/or PostgreSQL.
Full-stack Data Solutions: Develop full-lifecycle data solutions, from data ingestion (using custom AWS-based data movement/ETL processes via Glue with Python code) to downstream real-time and historical reports.
Business Need to Execution Focus: Understand data-driven business objectives and develop solutions that leverage various technologies to solve for those needs. Along with great problem-solving skills, a strong desire to learn our operational environment is a necessity.
Delivery Speed Enablement: Build reusable data-related tools, CI/CD pipelines, and automated testing. Enable DevOps model usage focused on continuous improvement, and ultimately reduce unnecessary dependencies.
Shift Security Left: Ensure security components and requirements are implemented up front via automation as part of all solutions being developed.
Focus on the Future: Stay current on industry best practices and emerging technologies and proactively translate those into data platform improvements.
Be a Great Team Player: Train team members in proper coding techniques, create proper documentation as needed, and be a solid leader on the team as a senior-level engineer.
Support US Operations: Operate partially within the US Eastern time zone to ensure appropriate alignment and coordination with the US-based teams.

Qualifications Required
What your background looks like
Extensive experience in modern data manipulation and preparation via SQL code and translating business requirements into usable reports.
Solid automation skill set and the ability to design and create solutions that drive out manual data/report assembly processes within an organization.
Experience constructing reports within a BI tool while also taking ownership of upstream and downstream elements.
Able to create CI/CD pipelines that perform code deployments and automated testing.
Ability to identify business needs and proactively create reporting tools that will consistently add value.
Strong ability and willingness to help others and be an engaged part of the team. Patience and a collaborative personality are a must; we need a true team player who can help strengthen our overall group.
Goal-driven individual with a proven career track record of achievement. We want the best of the best and reward stellar performers!

Skills
3+ years developing complex SQL code, preferably within Redshift and/or PostgreSQL.
1+ years using Python, Java, C#, or another similar object-oriented language.
CI/CD pipeline construction, preferably using GitHub Actions.
Git experience.
General knowledge of AWS services, with a preference for Glue and Lambda.
Infrastructure as code (CloudFormation, Terraform, or a similar product) is a plus.
Google Looker experience is a plus (not required).

Qualifications
We are looking for 4 to 8 years of experience, of which:
3+ years developing complex SQL code, preferably within Redshift and/or PostgreSQL.
1+ years using Python, Java, C#, or another similar object-oriented language.
CI/CD pipeline construction, preferably using GitHub Actions.
General knowledge of AWS services, with a preference for Glue and Lambda.
Infrastructure as code (CloudFormation, Terraform, or a similar product) is a plus.
Google Looker experience is a plus (not required).

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why.
Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here
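The role above calls out custom AWS Glue ETL processes written in Python. Below is a minimal, hedged skeleton of a Glue Spark job that reads a Glue Data Catalog table, applies a simple filter, and writes Parquet to S3. The database, table, column, and bucket names are hypothetical placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (hypothetical database/table names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="contact_center_raw",
    table_name="call_events",
)

# Convert to a Spark DataFrame for a simple transformation.
events = source.toDF()
recent = events.filter(F.col("event_date") >= "2024-01-01")  # hypothetical column

# Write curated output to S3 as Parquet (hypothetical bucket).
recent.write.mode("overwrite").parquet("s3://example-curated/call_events/")

job.commit()
```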
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Infrastructure Lead/Architect
Job Type: Full-Time
Location: On-site - Hyderabad, Pune, or New Delhi

Job Summary
Join our customer's team as an Infrastructure Lead/Architect and play a pivotal role in architecting, designing, and implementing next-generation cloud infrastructure solutions. You will drive cloud and data platform initiatives, ensure system scalability and security, and act as a technical leader, shaping the backbone of our customers' mission-critical applications.

Key Responsibilities
Architect, design, and implement robust, scalable, and secure AWS cloud infrastructure utilizing services such as EC2, S3, Lambda, RDS, Redshift, and IAM.
Lead the end-to-end design and deployment of high-performance, cost-efficient Databricks data pipelines, ensuring seamless integration with business objectives.
Develop and manage data integration workflows using modern ETL tools in combination with Python and Java scripting.
Collaborate with Data Engineering, DevOps, and Security teams to build resilient, highly available, and compliant systems aligned with operational standards.
Act as a technical leader and mentor, guiding cross-functional teams through infrastructure design decisions and conducting in-depth code and architecture reviews.
Oversee project planning, resource allocation, and deliverables, ensuring projects are executed on time and within budget.
Proactively identify infrastructure bottlenecks, recommend process improvements, and drive automation initiatives.
Maintain comprehensive documentation and uphold security and compliance standards across the infrastructure landscape.

Required Skills and Qualifications
8+ years of hands-on experience in IT infrastructure, cloud architecture, or related roles.
Extensive expertise with AWS cloud services; AWS certifications are highly regarded.
Deep experience with Databricks, including cluster deployment, Delta Lake, and machine learning integrations.
Strong programming and scripting proficiency in Python and Java.
Advanced knowledge of ETL/ELT processes and tools such as Apache NiFi, Talend, Airflow, or Informatica.
Proven track record in project management and leading cross-functional teams; PMP or Agile/Scrum certifications are a plus.
Familiarity with CI/CD workflows and Infrastructure as Code tools like Terraform and CloudFormation.
Exceptional problem-solving, stakeholder management, and both written and verbal communication skills.

Preferred Qualifications
Experience with big data platforms such as Spark or Hadoop.
Background in regulated environments (e.g., finance, healthcare).
Knowledge of Kubernetes and AWS container orchestration (EKS).
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As a Data Engineer on the Data and AI team, you will design and implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance.
You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks.
If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Design and implement ETL/ELT frameworks that handle large-scale data operations, while building reusable components for data ingestion, transformation, and orchestration and ensuring data quality and reliability.
Establish and maintain robust data governance standards by implementing comprehensive security controls, access management frameworks, and privacy-compliant architectures that safeguard sensitive information.
Drive the implementation of data solutions, both real-time and batch, optimizing them for both analytical workloads and AI/ML applications.
Lead technical design reviews and provide mentorship on data engineering best practices, identifying opportunities for architectural improvements and guiding the implementation of enhanced solutions.
Build data quality frameworks with robust monitoring systems and validation processes to ensure data accuracy and reliability throughout the data lifecycle.
Drive continuous improvement initiatives by evaluating and implementing new technologies and methodologies that enhance data infrastructure capabilities and operational efficiency.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including developing and optimizing ETL/ELT processes, implementing data governance controls, and reviewing code for data processing systems. You'll work closely with software engineers, scientists, and product managers, participating in technical design discussions and sharing your expertise in data architecture and engineering best practices.
Your responsibilities extend to communicating with non-technical stakeholders, explaining data-related projects and their business impact. You'll also mentor junior engineers and contribute to maintaining comprehensive technical documentation. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations.
Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved complex technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.
If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you're passionate about this role and want to make an impact on a global scale, please apply!

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
3+ years of data engineering experience
Bachelor's degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - Amazon Dev Center India - Hyderabad
Job ID: A2996966
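Since this role emphasizes data quality frameworks with validation and monitoring, here is a minimal, hedged PySpark sketch of two common checks (a row-count floor and a null-rate ceiling) that could gate a pipeline stage. Table paths, columns, and thresholds are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal data quality sketch: fail fast when a curated table misses basic checks.
# Paths, columns, and thresholds below are hypothetical placeholders.
spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://example-curated/orders/")

failures = []

# Check 1: the table should not be unexpectedly small.
row_count = df.count()
if row_count < 10_000:
    failures.append(f"row count too low: {row_count}")

# Check 2: key columns should have a near-zero null rate.
for column in ("order_id", "customer_id"):
    null_rate = df.filter(F.col(column).isNull()).count() / max(row_count, 1)
    if null_rate > 0.001:
        failures.append(f"{column} null rate {null_rate:.4f} exceeds threshold")

if failures:
    # Surface the failures to the orchestrator so the downstream load is blocked.
    raise RuntimeError("Data quality checks failed: " + "; ".join(failures))
```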
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As a Data Engineer on the Data and AI team, you will implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance.
You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks.
If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative ideas to market.
Contribute to the implementation and maintenance of data architecture, infrastructure, and storage solutions under the guidance of senior engineers, focusing on data quality and reliability.
Assist in building data pipelines, pipeline orchestration, data governance frameworks, data quality testing, and pipeline management using continuous integration and deployment, while learning best practices from experienced team members.
Participate in technical discussions and contribute to database design decisions, learning about scalability and reliability considerations while implementing optimized code according to team standards.
Execute assigned technical tasks within larger projects, writing well-tested ETL/ELT pipelines, providing thorough documentation, and following established data engineering practices.
Work in an agile environment to deliver high-quality data pipelines supporting real-time and end-of-day data requirements.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including troubleshooting, data quality deep dives, developing optimized ETL/ELT processes, implementing data governance controls, and writing unit-tested code and getting it reviewed. You'll continually improve ongoing processes, automating or simplifying data engineering efforts.
You'll work closely with senior engineers, data scientists, and product managers, participating in technical design discussions and learning the business context and technologies in data architecture. Your responsibilities extend to communicating with non-technical stakeholders, explaining root causes and their business impact. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations.
Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to continually incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
1+ years of data engineering experience
Bachelor's degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2996963
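The responsibilities above mention writing well-tested ETL/ELT pipelines. As a small, hedged illustration, here is a pure-Python transformation step with a pytest-style unit test; the record shape is a hypothetical example, not a schema from the role.

```python
# transform.py - a tiny, testable transformation step (hypothetical record shape).
def normalize_order(record: dict) -> dict:
    """Lower-case the status, coerce the amount to float, and drop unknown keys."""
    return {
        "order_id": str(record["order_id"]),
        "status": record.get("status", "unknown").strip().lower(),
        "amount": float(record.get("amount", 0.0)),
    }


# test_transform.py - run with `pytest` to validate the step before deployment.
def test_normalize_order_lowercases_status_and_coerces_amount():
    raw = {"order_id": 42, "status": "  SHIPPED ", "amount": "19.99", "extra": "ignored"}
    assert normalize_order(raw) == {
        "order_id": "42",
        "status": "shipped",
        "amount": 19.99,
    }
```

Keeping the transformation free of I/O makes it trivial to unit test before wiring it into the wider pipeline.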
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As a Data Engineer on the Data and AI team, you will implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance.
You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks.
If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative ideas to market.
Contribute to the implementation and maintenance of data architecture, infrastructure, and storage solutions under the guidance of senior engineers, focusing on data quality and reliability.
Assist in building data pipelines, pipeline orchestration, data governance frameworks, data quality testing, and pipeline management using continuous integration and deployment, while learning best practices from experienced team members.
Participate in technical discussions and contribute to database design decisions, learning about scalability and reliability considerations while implementing optimized code according to team standards.
Execute assigned technical tasks within larger projects, writing well-tested ETL/ELT pipelines, providing thorough documentation, and following established data engineering practices.
Work in an agile environment to deliver high-quality data pipelines supporting real-time and end-of-day data requirements.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including troubleshooting, data quality deep dives, developing optimized ETL/ELT processes, implementing data governance controls, and writing unit-tested code and getting it reviewed. You'll continually improve ongoing processes, automating or simplifying data engineering efforts.
You'll work closely with senior engineers, data scientists, and product managers, participating in technical design discussions and learning the business context and technologies in data architecture. Your responsibilities extend to communicating with non-technical stakeholders, explaining root causes and their business impact. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations.
Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to continually incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
1+ years of data engineering experience
Bachelor's degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2996965
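A day in this role starts with reviewing data pipeline monitoring alarms. The hedged boto3 sketch below lists CloudWatch alarms currently in the ALARM state so they can be triaged; the alarm name prefix and region are hypothetical placeholders.

```python
import boto3

# Minimal sketch: list pipeline-related CloudWatch alarms that are firing.
# The alarm name prefix and region are hypothetical placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(AlarmNamePrefix="data-pipeline-", StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(
            alarm["AlarmName"],
            alarm["StateUpdatedTimestamp"],
            alarm.get("StateReason", ""),
        )
```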
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We're seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and Commercial Analytics to join our team. You'll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value.

Key Responsibilities
Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions.
Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data).
Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities.
Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows.
Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning.
Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations.
Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends.

Qualifications
Education: Bachelor's/Master's in Data Science, Computer Science, Statistics, or a related field.
Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors).
Technical Skills:
Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn).
Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2).
Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies.
Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data.
Communication: Ability to articulate technical concepts to business stakeholders.

Preferred Qualifications
AWS Certified Machine Learning Specialty or similar certifications.
Experience with big data tools (Spark, Redshift) or MLOps practices.
Knowledge of NLP, reinforcement learning, or real-time recommendation systems.
Exposure to BI tools (Tableau, Power BI) for dashboarding.
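As a rough illustration of the SageMaker workflow this role describes, the sketch below trains a scikit-learn propensity model with the SageMaker Python SDK and deploys it to a real-time endpoint. The IAM role ARN, S3 paths, entry-point script, instance types, and framework version are hypothetical assumptions, and exact estimator arguments can differ by SDK version.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

# Minimal sketch of training and deploying a propensity model on SageMaker.
# Role ARN, S3 paths, and train.py are hypothetical placeholders.
session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # placeholder

estimator = SKLearn(
    entry_point="train.py",          # script containing the scikit-learn training code
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",       # assumed available scikit-learn container version
    sagemaker_session=session,
)

# Launch a training job against prepared features in S3 (placeholder path).
estimator.fit({"train": "s3://example-bucket/nba/train/"})

# Deploy the trained model to a real-time inference endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# Score a single customer feature vector (illustrative shape).
print(predictor.predict([[0.42, 3, 1, 0.87]]))
```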
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.

What you'll be doing:
Design, develop, and optimise DBT models to support scalable data transformations.
Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect.
Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks.
Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures.
Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications.
Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution.
Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows.
Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance.

What you'll have:
5+ years of hands-on experience with DBT, including model design, testing, and performance tuning.
5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning.
5+ years of programming experience, especially in building custom DBT macros, scripts, and APIs, and working with AWS services using boto3.
3+ years of experience with orchestration tools like Apache Airflow and Prefect for scheduling DBT jobs.
Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery.
Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform).
Ability to effectively communicate timelines and deliver the MVPs set for the sprint.
Strong analytical and problem-solving skills, with the ability to work across cross-functional teams.

Nice to haves:
Experience in hardware manufacturing data processing.
Contributions to open-source data engineering tools.
Knowledge of Tableau or other BI tools for data visualization.
Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization.
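Since this role centers on orchestrating DBT models with tools like Airflow, here is a minimal, hypothetical Airflow DAG that runs dbt run followed by dbt test on a daily schedule. The project directory, target name, and schedule are illustrative assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal sketch: orchestrate dbt transformations and tests with Airflow.
# DBT_DIR, the target, and the schedule are hypothetical placeholders.
DBT_DIR = "/opt/airflow/dbt/analytics_project"

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )

    # Only test the models after they have been built successfully.
    dbt_run >> dbt_test
```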
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description
Amazon Prime is a program that provides millions of members with unlimited one-day delivery, unlimited streaming of video and music, secure online photo storage, access to Kindle e-books, as well as Prime special deals on Prime Day. In India, Prime members get unlimited free One-Day and Two-Day delivery, video streaming, and early and exclusive access to deals. After the launch in 2016, the Amazon Prime team is now looking for a detail-oriented business intelligence engineer to lead business intelligence for Prime and drive member insights.
At Amazon, we're always working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a dynamic, organized, and customer-focused analytics expert to join our Amazon Prime Analytics team. The team supports the Amazon India Prime organization by producing and delivering metrics, data, models, and strategic analyses. This is an individual contributor role that requires excellent team leadership skills, business acumen, and the breadth to work across multiple Amazon Prime business teams, Data Engineering, Machine Learning, and Software Development teams. A successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail and a proven ability to work in a fast-paced and ever-changing environment.

Key job responsibilities
The successful candidate will work with multiple global site leaders, business analysts, software developers, database engineers, and product management, in addition to stakeholders in business, finance, marketing, and service teams, to create a coherent customer view. They will:
Define and lead the data strategy of various analytical products owned within the Prime Analytics team.
Develop and improve the current data architecture using AWS Redshift, AWS S3, AWS Aurora (Postgres), and Hadoop/EMR.
Improve upon the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability.
Create the entire ML framework for Data Scientists in AWS Bedrock, SageMaker, and EMR clusters.
Stay up to date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of advertiser experience.
Design and manage data models that serve multiple Weekly Business Reports (WBRs) and other business-critical reporting.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
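As a small illustration of the Redshift-centred reporting work described above, the sketch below uses the boto3 Redshift Data API to run an aggregate query and fetch the result. The cluster identifier, database, user, and table names are hypothetical placeholders.

```python
import time

import boto3

# Minimal sketch: run a reporting query against Redshift via the Data API.
# Cluster identifier, database, user, and table names are hypothetical.
client = boto3.client("redshift-data", region_name="ap-south-1")

response = client.execute_statement(
    ClusterIdentifier="example-prime-cluster",
    Database="analytics",
    DbUser="reporting_user",
    Sql="SELECT signup_week, COUNT(*) AS members FROM prime_members GROUP BY 1 ORDER BY 1;",
)
statement_id = response["Id"]

# Poll until the statement finishes (simplified; real code should handle failures and back off).
while client.describe_statement(Id=statement_id)["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(2)

result = client.get_statement_result(Id=statement_id)
for record in result["Records"]:
    print(record)
```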
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description
The IN Data Engineering & Analytics (IDEA) team is looking to hire a rock-star Data Engineer to build data pipelines and enable ML models for Amazon India businesses. IDEA is the central data engineering and analytics team for all A.in businesses. The team's charter includes: 1) providing a Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure, frameworks for visualizing and automating the generation of reports and insights, and self-service data applications for ingesting, storing, discovering, processing, and querying data; 2) providing business-specific data solutions for various business streams like Payments, Finance, Consumer, and Delivery Experience.
The Data Engineer will play a key role in performing data extraction and transformation, and in building and managing data pipelines to ensure data availability for the ML and LLM models of IN businesses. The role sits at the heart of the technology and business worlds and provides opportunities for growth, high business impact, and working with seasoned business leaders.
An ideal candidate will have a sound technical background in working with SQL, scripting (Python, TypeScript, JavaScript), databases, ML/LLM models, and big data technologies such as Apache Spark (PySpark, Spark SQL). An ideal candidate is a self-starter who can start from a requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and someone who gets work done in business time.

Key job responsibilities
Build end-to-end data extraction, data transformation, and data pipelines to ensure data availability for ML and LLM models that are critical to IN businesses.
Enable ML/LLM tools by setting up all the required underlying data infrastructure, data pipelines, and permissions to generate training and inference data for the ML models.
Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, scripting, and Amazon/AWS big data technologies.
Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
Drive operational excellence and build automation and mechanisms to reduce operations.
Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life
The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy, and timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets, and self-service reporting capabilities.
Our core responsibilities towards the India marketplace include: a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing, and querying of data; b) building ready-to-use datasets for easy and faster access to the data; c) automating standard business analysis, reporting, and dashboarding; d) empowering the business with self-service tools to manage data and generate insights.

Basic Qualifications
2+ years of data engineering experience
Experience with SQL
Experience with one or more scripting languages (e.g., Python, KornShell)
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)

Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
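To illustrate the kind of work described above, preparing training data for ML models with Spark SQL, here is a minimal PySpark sketch that joins two hypothetical datasets and writes a feature table. Table names, columns, and paths are illustrative assumptions, not real A.in datasets.

```python
from pyspark.sql import SparkSession

# Minimal sketch: build a training dataset for an ML model using Spark SQL.
# Input paths, table names, and columns are hypothetical placeholders.
spark = SparkSession.builder.appName("ml-training-data").getOrCreate()

spark.read.parquet("s3a://example-dw/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3a://example-dw/customers/").createOrReplaceTempView("customers")

training = spark.sql("""
    SELECT
        c.customer_id,
        COUNT(o.order_id)                 AS orders_90d,
        SUM(o.order_amount)               AS spend_90d,
        MAX(o.order_date)                 AS last_order_date,
        c.is_prime_member                 AS label
    FROM customers c
    LEFT JOIN orders o
        ON o.customer_id = c.customer_id
       AND o.order_date >= date_sub(current_date(), 90)
    GROUP BY c.customer_id, c.is_prime_member
""")

# Persist the feature table for downstream model training.
training.write.mode("overwrite").parquet("s3a://example-ml/training/prime_propensity/")
```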
Posted 2 weeks ago
7.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
We are #hiring for one of our #clients for the role of #dataengineer (7 years, #Jaipur).

Data Engineer - DWH
Data ingestion and transformation in AWS, and coordinating tasks amongst the team.

Our Data Engineers will typically:
Work in building and architecting multiple data pipelines, and end-to-end ETL and ELT processes for the DW.
Build, maintain, and monitor batch and real-time ETL pipelines in an AWS architecture (Kinesis, S3, EMR, Redshift, etc.).
Work closely with the Data Analytics teams to develop a clear understanding of data and data infrastructure needs; assist with data-related technical issues.
Develop the data strategy (sources, flow of data, storage, and usage), best practices, and patterns.
Perform data validation and quality assurance.
Present technical solutions to various stakeholders.
Provide day-to-day support of the DW and DL environments, with excellent communication across teams; monitor new deployments and services, escalating issues where appropriate.

Who we prefer:
Data warehousing concepts
Building ETL pipelines
Performance tuning of SQL queries
Data modeling, architecture, and design of data systems
Job scheduling frameworks
Documentation skills
Good to have: AWS, EMR, Spark

Education Qualification: BE / B.Tech / M.Tech from Tier 1 institutes
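As a small illustration of the real-time ingestion path mentioned above (Kinesis feeding an AWS warehouse architecture), here is a hedged boto3 sketch that publishes events to a Kinesis stream. The stream name, region, and event fields are hypothetical placeholders.

```python
import json
import time

import boto3

# Minimal sketch: push clickstream-style events into a Kinesis stream for
# downstream ETL into the warehouse. Stream name and payload are hypothetical.
kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="example-clickstream",          # hypothetical stream
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["user_id"]),        # keeps a user's events ordered per shard
    )

if __name__ == "__main__":
    publish_event({"user_id": 123, "page": "/checkout", "ts": int(time.time())})
```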
Posted 2 weeks ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Markovate. At Markovate, we dont just follow trendswe drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI Consulting And Gen AI Development To Pioneering AI Agents And Agentic AI, We Empower Our Partners To Lead Their Industries With Forward-thinking Precision And Unmatched Overview We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling. Requirements This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault Requirements : 9+ years of experience in data engineering and data architecture. Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail with discretion. The candidate must have strong work ethics and trustworthiness. Must be highly collaborative and team oriented with commitment to Responsibilities : Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze ? silver ? gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and Open Metadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modelling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. 
Experienced in data validation and exploratory data analysis with pandas profiling and AWS Glue Data Quality.
Great to have:
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Proficiency in SQL and at least one programming language (e.g., Python).
What it's like to be at Markovate:
At Markovate, we thrive on collaboration and embrace every innovative idea. We invest in continuous learning to keep our team ahead in the AI/ML landscape. Transparent communication is key; every voice at Markovate is valued. Our agile, data-driven approach transforms challenges into opportunities. We offer flexible work arrangements that empower creativity and balance. Recognition is part of our DNA; your achievements drive our success. Markovate is committed to sustainable practices and positive community impact. Our people-first culture means your growth and well-being are central to our mission.
Location: hybrid model, 2 days onsite.
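To make the bronze-to-silver promotion concrete, here is a small, hedged sketch: a PySpark job that checks for an expected set of columns, posts an MS Teams alert on drift, and writes a deduplicated silver table. It assumes Delta Lake is available on the cluster; the lake paths, column set, and webhook URL are placeholders, not details from the role.

```python
# Illustrative bronze -> silver promotion with a basic schema check and an
# MS Teams alert on failure. Paths, columns, and webhook are assumptions.
import requests
from pyspark.sql import SparkSession, functions as F

EXPECTED_COLUMNS = {"customer_id", "event_ts", "event_type"}
TEAMS_WEBHOOK = "https://example.webhook.office.com/..."  # placeholder incoming-webhook URL

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Assumes the Delta Lake libraries are installed on the cluster.
bronze = spark.read.format("delta").load("s3://example-lake/bronze/events/")

missing = EXPECTED_COLUMNS - set(bronze.columns)
if missing:
    # Notify the channel and stop the run rather than writing bad data forward.
    requests.post(TEAMS_WEBHOOK, json={"text": f"Schema drift detected: missing {sorted(missing)}"})
    raise ValueError(f"Bronze schema missing columns: {missing}")

silver = (
    bronze.dropDuplicates(["customer_id", "event_ts"])
          .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("s3://example-lake/silver/events/")
```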
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
The Opportunity
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data team in Gurgaon. The ideal candidate will have a strong background in designing, building, and maintaining robust, scalable, and efficient data pipelines and data warehousing solutions. You will play a crucial role in transforming raw data into actionable insights, enabling data-driven decision-making across the organization.
Responsibilities:
Data Pipeline Development: Design, develop, construct, test, and maintain highly scalable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala, Java).
Data Warehousing: Build and optimize data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) to support reporting, analytics, and machine learning initiatives.
Data Modeling: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and design optimal data models (dimensional, relational, etc.).
Performance Optimization: Identify and implement solutions for data quality issues, data pipeline performance bottlenecks, and data governance challenges.
Cloud Technologies: Work extensively with cloud-based data platforms (AWS, Azure, GCP) and their respective data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Azure Synapse, GCS, Dataflow, BigQuery).
Automation & Monitoring: Implement automation for data pipeline orchestration, monitoring, and alerting to ensure data reliability and availability (a minimal data-quality gate is sketched after this listing).
Mentorship: Mentor junior data engineers, provide technical guidance, and contribute to best practices and architectural decisions within the data team.
Collaboration: Work closely with cross-functional teams, including product, engineering, and business intelligence, to deliver data solutions that meet business needs.
Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and data processes.
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
5+ years of professional experience in data engineering, with a strong focus on building and optimizing data pipelines and data warehousing solutions.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java). Python is highly preferred.
Extensive experience with SQL and relational databases.
Demonstrated experience with cloud data platforms (AWS, Azure, or GCP) and their relevant data services.
Strong understanding of data warehousing concepts (e.g., Kimball methodology, OLAP, OLTP) and experience with data modeling techniques.
Experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka).
Familiarity with version control systems (e.g., Git).
Preferred Skills:
Experience with specific data warehousing solutions like Snowflake, Redshift, or Google BigQuery.
Knowledge of containerization technologies (Docker, Kubernetes).
Experience with CI/CD pipelines for data solutions.
Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker).
Understanding of machine learning concepts and how data engineering supports ML workflows.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a collaborative team in a fast-paced environment.
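As a concrete illustration of the automation-and-monitoring theme, below is a minimal data-quality gate in Python: row-count, null-rate, and duplicate-key checks that fail fast so an orchestrator can alert. The thresholds, file name, and column names are assumptions for the sketch, not requirements of the role.

```python
# A minimal, illustrative data-quality gate for a pipeline step.
# Thresholds and column names are hypothetical.
import pandas as pd

def quality_gate(df: pd.DataFrame, key_col: str,
                 min_rows: int = 1000, max_null_rate: float = 0.01) -> None:
    """Raise if the extract looks incomplete, sparse, or duplicated."""
    if len(df) < min_rows:
        raise ValueError(f"Row count {len(df)} below expected minimum {min_rows}")
    null_rate = df[key_col].isna().mean()
    if null_rate > max_null_rate:
        raise ValueError(f"Null rate {null_rate:.2%} on {key_col} exceeds {max_null_rate:.2%}")
    if df[key_col].duplicated().any():
        raise ValueError(f"Duplicate keys found in {key_col}")

# Example usage against a hypothetical extract:
orders = pd.read_parquet("orders_extract.parquet")
quality_gate(orders, key_col="order_id")
```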
Posted 2 weeks ago
5.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Role Overview: We are seeking a skilled and detail-oriented Integration Consultant with 5 to 6 years of experience to join our team. The ideal candidate will have expertise in designing, building, and maintaining data pipelines and ETL workflows, leveraging tools and technologies like AWS Glue, CloudWatch, PySpark, APIs, SQL, and Python.
Key Responsibilities
Pipeline Creation and Maintenance: Design, develop, and deploy scalable data pipelines. Optimize pipeline performance and ensure data accuracy and integrity.
ETL Development: Create ETL workflows using AWS Glue and PySpark to process and transform large datasets (a skeleton Glue job is sketched below). Ensure compliance with data governance and security standards.
Data Analysis and Processing: Write efficient SQL queries for data extraction, transformation, and reporting. Develop Python scripts to automate data tasks and improve workflows.
Monitoring and Troubleshooting: Utilize AWS CloudWatch to monitor pipeline health and performance. Identify and resolve issues in a timely manner to minimize downtime.
API Integration: Integrate and manage APIs to connect external data sources and services.
Collaboration: Work closely with cross-functional teams to understand data requirements and provide solutions. Communicate effectively with stakeholders to ensure successful project delivery.
Requirements
Required Skills and Qualifications:
Experience: 5-6 years; o9 Solutions platform experience is mandatory.
Strong experience with AWS Glue and CloudWatch.
Proficiency in PySpark, Python, and SQL.
Hands-on experience with API integration and management.
Solid understanding of ETL processes and pipeline creation.
Strong analytical and problem-solving skills.
Familiarity with data security and governance best practices.
Preferred Skills
Knowledge of other AWS services such as S3, EC2, Lambda, or Redshift.
Experience with PySpark, APIs, SQL optimization, and Python.
Exposure to data visualization tools or frameworks.
Education
Bachelor's degree in Computer Science, Information Technology, or a related field.
Note: For your candidature to be considered for this job, you must also apply on the company's redirected page for this job.
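For orientation, here is a skeleton of an AWS Glue PySpark job along the lines described: read a table from the Glue Data Catalog, deduplicate, and write Parquet to S3. The database, table, and bucket names are placeholders rather than anything specific to this engagement.

```python
# Skeleton AWS Glue (PySpark) job: catalog read -> simple transform -> Parquet on S3.
# Database, table, and path names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table).
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_shipments"
)

# Basic transformation: drop duplicate shipment records.
df = source.toDF().dropDuplicates(["shipment_id"])

# Write curated output back to S3 as Parquet.
df.write.mode("overwrite").parquet("s3://example-curated/shipments/")
job.commit()
```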
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Data Engineer
Location: Bengaluru, Karnataka, India.
Type: Contract/Freelance.
About The Role
We're looking for an experienced Data Engineer on contract (4-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll be in charge of the whole lifecycle of our data pipelines, making sure the data is good, reliable, and fast.
What You'll Do
Design and build efficient data pipelines using Spark / PySpark / Scala.
Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs); a minimal DAG is sketched below.
Clean, transform, and prepare data for analysis.
Use Python for data tasks, automation, and building tools.
Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure.
Collaborate closely with the Analytics team to understand what data they need and provide solutions.
Help develop and maintain our Node.js backend, using TypeScript, for data services.
Use YAML to manage the settings for our data tools.
Set up and manage automated deployment processes (CI/CD) using GitHub Actions.
Monitor and fix problems in our data pipelines to keep them running smoothly.
Implement checks to ensure our data is accurate and consistent.
Help design and build data warehouses and data lakes.
Use SQL extensively to query and work with data in different systems.
Work with streaming data using technologies like Kafka for real-time data processing.
Stay updated on the latest data engineering technologies.
Guide and mentor junior data engineers.
Help create data management rules and procedures.
What You'll Need
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
4-8 years of experience as a Data Engineer.
Strong skills in Spark and Scala for handling large amounts of data.
Good experience with Airflow for managing data workflows and understanding DAGs.
Solid understanding of how to transform and prepare data.
Strong programming skills in Python for data tasks and automation.
Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena).
Experience building data solutions for Analytics teams.
Familiarity with Node.js for backend development.
Experience with TypeScript for backend development is a plus.
Experience using YAML for configuration management.
Hands-on experience with GitHub Actions for automated deployment (CI/CD).
Good understanding of data warehousing concepts.
Strong database skills (OLAP/OLTP).
Excellent command of SQL for data querying and manipulation.
Experience with stream processing using Kafka or similar technologies.
Excellent problem-solving, analytical, and communication skills.
Ability to work well independently and as part of a team.
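To show the shape of the Airflow work mentioned above, here is a minimal DAG sketch with two dependent tasks on a daily schedule. The DAG id, task logic, and schedule are illustrative assumptions only.

```python
# Minimal Airflow 2.x DAG: two dependent tasks on a daily schedule.
# Names and task bodies are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def transform():
    print("transforming and loading")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # transform runs only after extract succeeds
    extract_task >> transform_task
```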
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company Description
Seosaph-infotech is a rapidly growing company in customized software development, providing advanced technology solutions and trusted services across multiple business verticals. In just two years, Seosaph-infotech has delivered exceptional solutions to industries such as finance, healthcare, and e-commerce, establishing itself as a reliable IT partner for businesses seeking to enhance their technological capabilities.
Responsibilities:
Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse or other cloud data warehousing technologies.
Govern data design/modelling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Develop a deep understanding of the business domains like Customer, Sales, Finance, Supplier, and the enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Drive collaborative reviews of data model design, code, data, and security features to drive data product development.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP data model.
Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping.
Partner with the data stewards team for data discovery and action by business customers and stakeholders.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Assist with data planning, sourcing, collection, profiling, and transformation.
Support data lineage and mapping of source system data to canonical data stores.
Create Source to Target Mappings (STTM) for ETL and BI.
Skills needed:
Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models, CPG domains).
Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data like IRI, Nielsen.
Location: Remote.
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 7+ years.
Location: Noida.
Key Responsibilities:
Data Architecture Design:
Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
Develop a data strategy and roadmap that aligns with the business objectives and ensures the scalability of data systems.
Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.
Data Integration & Management:
Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).
Collaboration with Stakeholders:
Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.
Technology Leadership:
Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.
Data Quality & Security:
Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.
Mentorship & Leadership:
Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.
Required Skills & Experience:
Extensive Data Architecture Expertise:
Over 7 years of experience in data architecture, data modeling, and database management.
Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
Advanced Knowledge of Data Platforms:
Expertise in the Azure cloud data platform is a must. Other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus.
Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
Data Governance & Compliance:
Strong understanding of data governance principles, data lineage, and data stewardship.
Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
Technical Leadership:
Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
Strong programming skills in languages such as Python, SQL, R, or Scala.
Certification:
Azure Certified Solutions Architect, Data Engineer, or Data Scientist certifications are mandatory.
Pre-Sales Responsibilities:
Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.
Additional Responsibilities:
Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.
Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
Posted 2 weeks ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
As a Business Development Executive at Mathionix Technologies Pvt Ltd, you will have the opportunity to showcase your skills in MS-Excel, digital marketing, email marketing, social media marketing, and English proficiency. Your role will involve driving growth and revenue through strategic business development initiatives.
Key Responsibilities
Utilize MS-Excel to analyze data and identify market trends to inform business development strategies.
Develop and implement digital marketing campaigns to drive lead generation and customer acquisition.
Manage email marketing campaigns to nurture leads and drive conversions.
Leverage social media marketing to increase brand awareness and engagement with target audiences.
Use your English proficiency (spoken and written) to effectively communicate with internal teams and external stakeholders.
Identify and pursue new business opportunities to expand the company's market reach.
Collaborate with cross-functional teams to develop and execute business development plans that align with company goals.
If you are a driven and results-oriented individual with a passion for business development and a strong proficiency in Excel, digital marketing, email marketing, and social media marketing, we invite you to join our dynamic team at Mathionix Technologies Pvt Ltd.
About Company: At Mathionix Technologies, we offer a wide range of services, including mobile application development for both hybrid/cross-platform apps using React Native CLI/Expo and Redux, and native apps for iOS (Swift) and Android (Java). We also specialize in Progressive Web Applications (PWAs) using React.js. Our web application development expertise spans full stack, front-end, and back-end development, utilizing technologies like React.js, Next.js, Python (Django), PHP (CodeIgniter/Laravel), and Node.js. We are proficient in managing databases such as MongoDB, MySQL, PostgreSQL, SQLite, and Amazon Redshift. Our website designing services focus on creating responsive and user-friendly designs with HTML5, CSS3, Bootstrap, and UI libraries like Material UI and Ant Design for React.js. Additionally, we work with various tools such as Bitbucket, GitHub, Firebase, Socket.io, and AWS services, and have extensive experience in implementing industry-specific APIs and web services.
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
CACI India, RMZ Nexity, Tower 30, 4th Floor, Survey No. 83/1, Knowledge City, Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India
Req #1097
02 May 2025
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.
About Data Platform
The Data Platform will be built and managed "as a Product" to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.
What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-useable solutions to reflect the business needs.
Responsibilities Will Include
Collaborating across CACI departments to develop and maintain the data platform
Building infrastructure and data architectures with CloudFormation and SAM
Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow (a small Lambda sketch follows this listing)
Monitoring and reporting on the data platform performance, usage and security
Designing and applying security and access control architectures to secure sensitive data
You Will Have
3+ years of experience in a Data Engineering role.
Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
Experience administrating databases and data platforms
Good coding discipline in terms of style, structure, versioning, documentation and unit tests
Strong proficiency in CloudFormation, Python and SQL
Knowledge and experience of relational databases such as Postgres and Redshift
Experience using Git for code versioning and lifecycle management
Experience operating to Agile principles and ceremonies
Hands-on experience with CI/CD tools such as GitLab
Strong problem-solving skills and ability to work independently or in a team environment
Excellent communication and collaboration skills
A keen eye for detail, and a passion for accuracy and correctness in numbers
Whilst not essential, the following skills would also be useful:
Experience using Jira, or other agile project management and issue tracking software
Experience with Snowflake
Experience with spatial data processing
More About The Opportunity
The Data Engineer role is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and impressive benefits package which includes:
Learning: budget for conferences, training courses and other materials
Health Benefits: family plan with 4 children and parents covered
Future You: matched pension and health care package
We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events.
CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group.
An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing, and we are supportive of veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society.
Other details: Pay Type: Salary
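As a small, hedged illustration of the "pipelines as code" bullet above, here is a sketch of an S3-triggered AWS Lambda handler that logs each new object and emits a custom CloudWatch metric. The bucket, namespace, and metric names are placeholders, not CACI specifics.

```python
# Illustrative S3-event Lambda handler: log each landed object and emit a
# custom CloudWatch metric. Namespace and metric names are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")

        # Custom metric so pipeline activity can be dashboarded and alerted on.
        cloudwatch.put_metric_data(
            Namespace="ExampleDataPlatform",
            MetricData=[{"MetricName": "ObjectsLanded", "Value": 1, "Unit": "Count"}],
        )
    return {"processed": len(records)}
```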
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are seeking a highly skilled and detail-oriented Senior SQL Data Analyst to join our data-driven team. This role will be responsible for leveraging advanced SQL skills to extract, analyze, and interpret complex datasets, delivering actionable insights to support business decisions. You will work closely with cross-functional teams to identify trends, solve problems, and drive data-informed strategies across the organization.
Key Responsibilities
Develop, write, and optimize advanced SQL queries to retrieve and analyze data from multiple sources.
Design and maintain complex data models, dashboards, and reports.
Collaborate with stakeholders to understand business needs and translate them into analytical requirements.
Conduct deep-dive analysis to identify key business trends and opportunities for growth or improvement.
Ensure data integrity and accuracy across systems and reporting tools.
Automate recurring reports and develop scalable data pipelines.
Present findings in a clear, compelling way to both technical and non-technical audiences.
Qualifications
Required:
Bachelor's degree in Computer Science, Information Systems, Mathematics, Statistics, or a related field.
5+ years of experience in data analysis or a similar role with a strong focus on SQL.
Expert proficiency in SQL (window functions, joins, CTEs, indexing, etc.); a runnable window-function example follows this listing.
Strong understanding of data warehousing concepts and relational database systems (e.g., PostgreSQL, SQL Server, Snowflake, Redshift).
Experience with BI tools like Tableau, Power BI, or Looker.
Excellent analytical, problem-solving, and communication skills.
Preferred:
Experience with scripting languages (Python, R) for data manipulation.
Familiarity with cloud data platforms (AWS, Azure).
Knowledge of ETL tools and best practices.
Previous experience in a fast-paced, agile environment.
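To ground the window-function requirement, here is a self-contained example: a per-customer running total computed with SUM() OVER (...), run through Python's built-in sqlite3 module so it works anywhere with a reasonably recent SQLite (3.25+ for window functions). The table and column names are invented for the example.

```python
# Self-contained window-function demo: running total of order amounts per customer.
# Requires SQLite >= 3.25 (bundled with modern Python builds).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('C1', '2024-01-01', 120.0),
        ('C1', '2024-01-05',  80.0),
        ('C2', '2024-01-02', 200.0);
""")

query = """
    SELECT customer_id,
           order_date,
           amount,
           SUM(amount) OVER (
               PARTITION BY customer_id
               ORDER BY order_date
           ) AS running_total
    FROM orders
    ORDER BY customer_id, order_date;
"""
for row in conn.execute(query):
    print(row)
```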
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Lead Data Engineer (AWS Glue)
Location: Gurugram, Haryana, India
Employment Type: Full-time | Partially remote
Apply by: No close date
Position Overview
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with AWS Glue, Apache Airflow, Kafka, SQL, Python and DataOps tools and technologies. Knowledge of SAP HANA and Snowflake is a plus. This role is critical for designing, developing, and maintaining our client's data pipeline architecture, ensuring the efficient and reliable flow of data across the organization.
Key Responsibilities
Design, Develop, and Maintain Data Pipelines
Develop robust and scalable data pipelines using AWS Glue, Apache Airflow, and other relevant technologies.
Integrate various data sources, including SAP HANA, Kafka, and SQL databases, to ensure seamless data flow and processing.
Optimize data pipelines for performance and reliability.
Data Management and Transformation
Design and implement data transformation processes to clean, enrich, and structure data for analytical purposes.
Utilize SQL and Python for data extraction, transformation, and loading (ETL) tasks.
Ensure data quality and integrity through rigorous testing and validation processes.
Collaboration and Communication
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
Collaborate with cross-functional teams to implement DataOps practices and improve data life cycle management.
Monitoring and Optimization
Monitor data pipeline performance and implement improvements to enhance efficiency and reduce latency.
Troubleshoot and resolve data-related issues, ensuring minimal disruption to data workflows.
Implement and manage monitoring and alerting systems to proactively identify and address potential issues.
Documentation and Best Practices
Maintain comprehensive documentation of data pipelines, transformations, and processes.
Adhere to best practices in data engineering, including code versioning, testing, and deployment procedures.
Stay up-to-date with the latest industry trends and technologies in data engineering and DataOps.
Required Skills and Qualifications
Technical Expertise
Extensive experience with AWS Glue for data integration and transformation.
Proficient in Apache Airflow for workflow orchestration.
Strong knowledge of Kafka for real-time data streaming and processing (a consumer sketch follows this listing).
Advanced SQL skills for querying and managing relational databases.
Proficiency in Python for scripting and automation tasks.
Experience with SAP HANA for data storage and management.
Familiarity with DataOps tools and methodologies for continuous integration and delivery in data engineering.
Preferred Skills
Knowledge of Snowflake for cloud-based data warehousing solutions.
Experience with other AWS data services such as Redshift, S3, and Athena.
Familiarity with big data technologies such as Hadoop, Spark, and Hive.
Soft Skills
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Detail-oriented with a commitment to data quality and accuracy.
Ability to work independently and manage multiple projects simultaneously.
About Us
We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies.
Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.
Why Work for Material
In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is a right fit for you. Here's a bit about who we are and highlights around what we offer.
Who We Are & What We Care About
Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance, and healthcare.
Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology and tracking. Our engagement management team makes it all hum for clients.
We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science & systems approach uniquely equips us to bring a rich frame of reference to our work.
A community focused on learning and making an impact. Material is an outcomes-focused company. We create experiences that matter, create new value and make a difference in people's lives.
What We Offer
Professional development and mentorship.
Hybrid work mode with remote-friendly workplace (6 times in a row Great Place To Work Certified).
Health and family insurance.
40 leaves per year along with maternity & paternity leaves.
Wellness, meditation, and counseling sessions.
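For a feel of the Kafka requirement in this listing, below is an illustrative consumer loop using the kafka-python client: it deserializes JSON events and prints them where a real pipeline would write to S3, Redshift, or a staging table. The topic, broker address, and group id are placeholder assumptions.

```python
# Illustrative Kafka consumer (kafka-python). Broker, topic, and group id are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",
    bootstrap_servers=["localhost:9092"],
    group_id="example-pipeline",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # In a real pipeline this would land in S3/Redshift or a staging table.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```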
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Respond effectively to the diverse perspectives, needs, and feelings of others.
Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
Use critical thinking to break down complex concepts.
Understand the broader objectives of your project or role and how your work fits into the overall strategy.
Develop a deeper understanding of the business context and how it is changing.
Use reflection to develop self-awareness, enhance strengths and address development areas.
Interpret data to inform insights and recommendations.
Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
Database Admin
The Database Lead/Admin will coordinate the day-to-day activities of the operational systems, processes, and infrastructure required for all service offerings being developed that assist clients on their cloud Managed Service Delivery. This individual will work with one or multiple clients to gather requirements and the corresponding work needed based on the client's Cloud Journey roadmap. They will manage the day-to-day business of operations including various stakeholders and internal and external delivery partners.
Responsibilities
The Database role supports our services that focus on database technologies, including Amazon Aurora, DynamoDB, Redshift, Athena, Couchbase, Cassandra, MySQL, MS-SQL and Oracle.
Extensive experience with installation, configuration, patching, backup-recovery, and configuring replication on Linux/Windows and CentOS infrastructure.
Experience in discussing DB requirements, performance, and integration issues with clients and providing better solutions or approaches, along with capacity planning.
Responsible for identifying and resolving performance bottlenecks in relation to CPU, I/O, memory, and DB architecture.
Responsible for database migration, i.e., on-prem to on-prem or to cloud.
Responsible for resource layout for new applications from the DB side.
Day-to-day production DB support and maintenance of different versions of server databases.
Design and build function-centric solutions in the context of transition from traditional, legacy platforms to microservices architectures.
Identifies trends and assesses opportunities to improve processes and execution.
Raises and tracks issues and conflicts, removes barriers, resolves issues of medium complexity involving partners, and escalates to appropriate levels when required.
Solicits and responds to feedback while gaining dedication and support.
Stays up to date on industry regulations, trends, and technology.
Coordinates with management to ensure all operational, administrative, and compliance functions within the team are being carried out in accordance with regulatory standards and best practices.
Qualifications
Bachelor's degree in Computer Science or related technology field preferred.
Minimum of 4 years of hands-on experience with database technologies.
Strong working knowledge of ITIL principles and ITSM.
Current understanding of industry trends and methodologies.
Outstanding verbal and written communication skills.
Excellent attention to detail.
Strong interpersonal skills and leadership qualities.
Posted 2 weeks ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as Junior Developer, Data Engineer, Senior Data Engineer, Tech Lead, and Data Architect.
Apart from expertise in Redshift, proficiency in the following skills can be beneficial: SQL, ETL tools, data modeling, cloud computing (AWS), and Python/R programming.
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!