
3652 Redshift Jobs - Page 13

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description: We are seeking a talented and motivated Data Engineer with 8+ years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or data lakes.
Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services such as S3, Redshift, Lambda, Glue, Kinesis, and others.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions such as AWS Redshift, Snowflake, or similar platforms.
Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience: Data Engineering: 6 years (Required); AWS Elastic MapReduce (EMR): 3 years (Required); AWS: 4 years (Required)
Work Location: In person
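
To illustrate the kind of Glue-based ETL pipeline this listing describes, below is a minimal, illustrative sketch of an AWS Glue (PySpark) job that reads raw CSV files from S3, applies a basic cleansing step, and writes curated Parquet back to S3. The bucket names, paths, and column names are assumptions for illustration only and are not part of the listing.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw CSV files from S3 (bucket/path are placeholders)
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/orders/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Transform: drop rows without an order id, cast the amount column to a number
df = raw.toDF()
df = (df.dropna(subset=["order_id"])
        .withColumn("amount", F.col("amount").cast("double")))

# Load: write curated Parquet partitioned by order date (path is a placeholder)
(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```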

Posted 1 week ago

Apply

0 years

4 - 8 Lacs

Calcutta

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Associate - Data Engineer, AWS!

Responsibilities
Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using big data technologies such as Apache Hadoop and Apache Spark, together with appropriate cloud-based services such as Amazon AWS.
Build data pipelines by building ETL (Extract-Transform-Load) processes.
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications
Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
Experience with Databricks will be an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications / Skills
Master’s degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering and Cloud certifications; Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.

Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Senior Associate. Primary Location: India-Kolkata. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jul 25, 2025, 8:07:51 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time.
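
As a rough illustration of the "ETL pipelines using AWS services, Python, Spark, and Kafka" responsibility above, here is a minimal Spark Structured Streaming sketch that reads order events from a Kafka topic and lands them as Parquet for downstream warehouse loads. The broker address, topic name, S3 paths, and event schema are placeholders, and the job assumes the spark-sql-kafka package is available on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed shape of the JSON events on the topic
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("amount", DoubleType()),
])

# Read JSON order events from Kafka (broker/topic names are placeholders)
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Parse the Kafka value bytes into typed columns
orders = (events.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("o"))
          .select("o.*"))

# Land the parsed stream as Parquet on S3 for downstream Glue/Redshift loads
query = (orders.writeStream.format("parquet")
         .option("path", "s3a://example-bucket/streams/orders/")
         .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
         .start())
query.awaitTermination()
```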

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requisition ID # 25WD89173

Position Overview
Autodesk is a world leader in 3D design software for manufacturing, simulation, construction, civil infrastructure, and entertainment. This role is a unique opportunity to maximize the success of Autodesk’s sales tool productivity investments. It requires the candidate to focus on the development of future-state business process capabilities and to drive the elimination of redundancies, optimize efficiency, lower costs, and streamline delivery to our GTM teams. You will play an integral role in the technical development of critical operational tools and business workflows, leveraging data for decision making. The role involves driving sales team productivity by creating future-state best practices for business analytics. You will be responsible for creating analytical reports and developing BI tools that produce action-oriented conclusions.

Key Responsibilities
Design, develop, and maintain scalable Power BI dashboards and reports, delivering actionable insights for business decision-making
Build and deploy Power Automate flows to streamline and automate business processes and improve operational efficiency
Translate complex data from various sources into meaningful visualizations using Power BI and other BI tools
Develop bots and automation tools using Power Platform to improve reporting and workflow integration
Collaborate with business stakeholders to gather requirements and deliver tailored BI solutions
Demonstrate strong SQL proficiency to extract and manipulate data from multiple sources
Apply data modeling and warehousing principles for BI solution architecture and performance optimization
Conduct ad hoc data mining, statistical analysis, and reporting to support business needs
Ensure clean, well-documented, and testable BI code with a focus on scalability and maintainability
Leverage creativity and advanced visualization skills to present complex data simply and effectively
Partner with cross-functional teams to integrate BI systems and align with organizational data infrastructure
Provide technical solutions that improve business processes through automation and analytics
Experience or knowledge in Snowflake, Amazon Redshift, and S3 is considered a strong advantage
Stay current with industry trends and best practices in data analytics and business intelligence

Education and Experience
Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field
5+ years of relevant experience in business analytics, BI development, or data engineering
Proven expertise in Power BI and Power Automate is essential
Proficient in SQL and working with large datasets across multiple sources
Hands-on experience in data modeling, scripting, and designing effective user interfaces
Working knowledge of cloud data platforms such as Snowflake, Amazon Redshift, and Amazon S3 is highly desirable
Experience with other BI tools like QlikView, Qlik Sense, Anaplan, or Looker is a plus
Familiarity with HTML, JavaScript, and Big Data/Hadoop environments is a plus
Basic knowledge of mobile app or bot development is advantageous
Strong analytical mindset with advanced Excel and statistical analysis skills
Excellent communication skills for both technical and non-technical audiences

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies.
We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – it’s at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you’re an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us! Salary transparency Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package. Diversity & Belonging We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Consultant - Cloud Data Engineer Introduction to role Are you ready to disrupt an industry and change lives? Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. As a Senior Consultant - Cloud Data Engineer, you will have the opportunity to lead and innovate, transforming our ability to develop life-changing medicines. Your work will directly impact patients, empowering the business to perform at its peak by combining ground breaking science with leading digital technology platforms and data. Accountabilities Lead the design, development, and maintenance of reliable, scalable data pipelines and ETL processes using tools such as SnapLogic, Snowflake, DBT, Fivetran, Informatica, and Python. Work closely with data scientists to understand model requirements and prepare the right data pipelines for training and deploying machine learning models. Collaborate with data scientists, analysts, and business teams to understand and optimize data requirements and workflows. Apply Power BI, Spotfire, Domo, Qlik Sense to create actionable data visualizations and reports that drive business decisions. Implement standard methodologies for version control and automation using Git Actions, Liquibase, Flyway, and CI/CD tools. Optimize data storage, processing, and integration bringing to bear AWS Data Engineering tools (e.g., AWS Glue, Amazon Redshift, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon EMR). Troubleshoot, debug, and resolve issues related to existing data pipelines and architectures. Ensure data security, privacy, and compliance with industry regulations and organizational policies. Provide mentorship to junior engineers, offering guidance on best practices and supporting technical growth within the team. Essential Skills/Experience SnapLogic: Expertise in SnapLogic for building, managing, and optimizing both batch and real-time data pipelines. Proficiency in using SnapLogic Designer for designing, testing, and deploying data workflows. In-depth experience with SnapLogic Snaps (e.g., REST, SOAP, SQL, AWS S3) and Ultra Pipelines for real-time data streaming and API management. AWS: Strong experience with AWS Data Engineering tools, including AWS Glue, Amazon Redshift, Amazon S3, AWS Lambda, Amazon Kinesis, AWS DMS, and Amazon EMR. Expertise in cloud data architectures, data migration strategies, and real-time data processing on AWS platforms. Snowflake: Extensive experience in Snowflake cloud data warehousing, including data modeling, query optimization, and managing ETL pipelines using DBT and Snowflake-native tools. Fivetran: Proficient in Fivetran for automating data integration from various sources to cloud-based data warehouses, optimizing connectors for data replication and transformation. Real-Time Messaging and Stream Processing: Experience with real-time data processing frameworks (e.g., Apache Kafka, Amazon Kinesis, RabbitMQ, Apache Pulsar). Desirable Skills/Experience Exposure to other cloud platforms such as Azure or Google Cloud Platform (GCP). Familiarity with data governance, data warehousing, and data lake architectures. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. 
We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we combine technology skills with a scientific mindset to make a meaningful impact. Our dynamic environment offers countless opportunities to learn and grow while working on cutting-edge technologies. We are committed to driving cross-company change to disrupt the entire industry. Ready to take on this exciting challenge? Apply now! Date Posted 16-Jul-2025 Closing Date 30-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overview
TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by third-party sellers.

Job Title: Business Intelligence Engineer III
Location: Pune
Duration: 6 Months
Job Type: Contract
Work Type: Onsite

Job Description
The Top Responsibilities:
Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures.
Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark.
Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight.
Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures.
Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources.
Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members.

Leadership Principles: Ownership; Deliver Results; Insist on the Highest Standards

Mandatory Requirements
3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies.
Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda.
Expertise in data modeling, dimensional modeling, and data transformation techniques.
Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau.
Strong SQL and Python programming skills for data processing and analysis.
Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS.
Excellent communication and collaboration skills to work effectively with cross-functional teams.

Preferred Skills
Hands-on experience with Apache Spark, Airflow, or other big data technologies.
Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.
Familiarity with agile software development methodologies.
AWS Certification (e.g., AWS Certified Data Analytics - Specialty).

Certification Requirements: Any Graduate

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
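
To make the Athena-based reporting pipeline mentioned above concrete, here is a small, illustrative boto3 snippet that runs an Athena query and reads back the results. The region, database, table, and output bucket are assumptions for illustration, not details from the listing.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Aggregate query over an assumed shipments table
query = """
    SELECT ship_method, COUNT(*) AS shipments
    FROM logistics.shipments
    WHERE ship_date >= DATE '2025-07-01'
    GROUP BY ship_method
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logistics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes, then print the result rows
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```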

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description ShipTech is the connective tissue which connects Transportation Service Providers, First Mile, Middle Mile, and Last Mile to facilitate the shipping of billions of packages each year. Our technology solutions power Amazon's complex shipping network, ensuring seamless coordination across the entire delivery chain. We are seeking a Business Intelligence Engineer II to join our ShipTech Program and Product Growth team, focusing on driving data-driven improvements for our ecosystem. This role will be instrumental in building the right data pipeline, analyzing and optimizing the program requests, scan related data, customer experience data, trans performance metrics and product adoption/growth patterns to enable data-driven decision making for our Program and Product teams. Key job responsibilities Analysis of historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations Collaborating with product and software development teams to implement analytics systems and data structures to support large-scale data analysis and delivery of analytical and machine learning models Mining and manipulating data from database tables, simulation results, and log files Identifying data needs and driving data quality improvement projects Understanding the broad range of Amazon’s and ShipTech's data resources, which to use, how, and when Thought leadership on data mining and analysis Helping to automate processes by developing deep-dive tools, metrics, and dashboards to communicate insights to the business teams Collaborating effectively with internal end-users, cross-functional software development teams, and technical support/sustaining engineering teams to solve problems and implement new solutions Develop ETL pipelines to process and analyze cross-network data. A day in the life ShipTech Program and Product Growth team is hiring for a BIE to own generating insights, defining metrics to measure and monitor, building analytical products, automation and self-serve and overall driving business improvements. The role involves combination of data mining, data-analysis, visualization, statistics, scripting, a bit of machine learning and usage of AWS services too. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in Statistical Analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A3043691

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a Data Engineer with 3+ years of experience, proficient in SQL and Python development. You will be responsible for designing, developing, and maintaining scalable data pipelines to support ETL processes using tools like Apache Airflow, AWS Glue, or similar. Your role involves optimizing and managing relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, or Cassandra for high performance and scalability. You will write advanced SQL queries, stored procedures, and functions to efficiently extract, transform, and analyze large datasets. Additionally, you will implement and manage data solutions on cloud platforms like AWS, Azure, or Google Cloud, utilizing services such as Redshift, BigQuery, or Snowflake. Your contributions to designing and maintaining data warehouses and data lakes will support analytics and BI requirements. Automation of data processing tasks through script and application development in Python or other programming languages is also part of your responsibilities.

As a Data Engineer, you will implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security. Collaboration with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions is essential. Identifying and resolving performance bottlenecks in data systems and optimizing data storage and retrieval are key aspects of the role. Maintaining comprehensive documentation for data processes, pipelines, and infrastructure is crucial. Staying up-to-date with the latest trends in data engineering, big data technologies, and cloud services is expected from you.

You should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field. Proficiency in SQL, relational databases, NoSQL databases, and Python programming, along with experience with data pipeline tools and cloud platforms, is required. Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus. Strong analytical and problem-solving skills with a focus on performance optimization and scalability are essential. Excellent verbal and written communication skills are necessary to convey technical concepts to non-technical stakeholders, and you should be able to work collaboratively in cross-functional teams.

Preferred certifications include AWS Certified Data Analytics, Google Professional Data Engineer, or similar. An eagerness to learn new technologies and adapt quickly in a fast-paced environment will be valuable in this role.
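
Since this role centers on pipelines orchestrated with tools like Apache Airflow, a minimal, illustrative DAG is sketched below: a single daily PythonOperator task that pulls a CSV extract, deduplicates it, and writes a staging Parquet file. It assumes Airflow 2.x; the paths and column names are hypothetical, and reading/writing s3:// URIs with pandas assumes s3fs is installed.

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load():
    # Extract: pull the day's CSV export (path is a placeholder; s3fs required for s3:// URIs)
    df = pd.read_csv("s3://example-bucket/exports/daily_orders.csv")
    # Transform: basic data quality step, drop rows without a key and remove duplicates
    df = df.dropna(subset=["order_id"]).drop_duplicates(subset=["order_id"])
    # Load: write a staging file for the warehouse COPY/merge step
    df.to_parquet("s3://example-bucket/staging/daily_orders.parquet", index=False)


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_transform_load", python_callable=extract_transform_load)
```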

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities which may include everything from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects like integrating data from various sources and managing big data pipelines that are easily accessible with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines
Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
Highly proficient in SQL and data modeling (conceptual and logical) concepts
Highly proficient with Python and Spark (3+ years)
Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure
Experience with Modern Data Stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
Exposure to Hadoop and shell scripting is a plus
Min 2 years, Databricks 1 year desirable, Python & Spark 1+ years, SQL only, any cloud exp 1+ year

Responsibilities
Design, implementation, and improvement of processes and automation of data infrastructure
Tuning of data pipelines for reliability and performance
Building tools and scripts to develop, monitor, and troubleshoot ETLs
Perform scalability, latency, and availability tests on a regular basis
Perform code reviews and QA data imported by various processes
Investigate, analyze, correct, and document reported data defects
Create and maintain technical specification documentation

(ref:hirist.tech)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: We are looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities which may include everything from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects like integrating data from various sources and managing big data pipelines that are easily accessible with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines
Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
Highly proficient in SQL and data modeling (conceptual and logical) concepts
Highly proficient with Python and Spark (3+ years)
Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure
Experience with Modern Data Stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
Exposure to Hadoop and shell scripting is a plus
Min 2 years, Databricks 1 year desirable, Python & Spark 1+ years, SQL, any cloud exp 1+ year

Responsibilities
Design, implementation, and improvement of processes and automation of the data infrastructure
Tuning of data pipelines for reliability and performance
Building tools and scripts to develop, monitor, and troubleshoot ETLs
Perform scalability, latency, and availability tests on a regular basis
Perform code reviews and QA data imported by various processes
Investigate, analyze, correct, and document reported data defects
Create and maintain technical specification documentation

(ref:hirist.tech)

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Maharashtra

On-site

The Data Analyst - Marketing position at Knya, India's leading online medical apparel brand, offers a unique opportunity for individuals with over 1 year of experience to play a pivotal role in analyzing and interpreting data related to the company's Direct-to-Consumer (D2C) marketing operations. As a Direct-to-Consumer Analytics Associate, you will collaborate with cross-functional teams to enhance the effectiveness of D2C strategies and contribute to data-driven decision-making processes. Key Responsibilities: - Analyze D2C data sets to identify trends, patterns, and insights related to customer behavior, product performance, and sales metrics. - Generate regular and ad-hoc reports to communicate key findings to relevant stakeholders. - Collaborate with marketing teams to optimize customer segmentation strategies. - Monitor and evaluate the performance of D2C channels, ad campaigns, and promotions. - Provide recommendations for optimizing marketing and sales efforts based on identified areas for improvement. - Develop and maintain forecasting models to predict future D2C sales, customer acquisition, and other relevant metrics. - Utilize predictive modeling techniques to anticipate customer behavior and market trends. - Support strategic decision-making processes with data-driven insights. - Ensure data accuracy and reliability by implementing data quality assurance processes. - Stay updated on industry best practices, tools, and technologies related to D2C analytics. Requirements: - Web Development experience for understanding and working with data layer implementations. - Proficiency in Python and SQL for data manipulation and analysis. - Strong SQL skills and experience working with large-scale datasets. - Proficiency in Looker Studio for creating dynamic, user-friendly dashboards. - Hands-on experience with AWS (e.g., Redshift, Lambda). - Bachelor's degree in a relevant field (e.g., Statistics, Mathematics, Business Analytics). - Experience in a Data Analyst role within the D2C marketing or E-commerce environment is a plus. - Familiarity with GCP and Google Tag Manager (GTM). - Understanding of Attribution Modeling and multi-touch attribution. - Strong understanding of D2C business models and key performance indicators. - Excellent communication and presentation skills. - Ability to work collaboratively in a cross-functional team environment. If you are passionate about interpreting data in the D2C space and possess the required qualifications and skills, we encourage you to send your application to hiring@knya.in and become a valuable member of our dynamic team. This is a full-time position based in Mumbai. The company offers Provident Fund benefits. Kindly provide your current CTC and location details in your application.,

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Work Location: Hyderabad

What Gramener offers you
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, a career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Cloud Lead – Analytics & Data Products
We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Roles and Responsibilities
Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Skills and Qualifications
7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
Hands-on expertise with the AWS ecosystem and tools listed above.
Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
Familiarity with data engineering and GenAI workflows is a plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.

About Us
We consult on and deliver solutions to organizations where data is at the core of decision making. We undertake strategic data consulting for organizations, laying out the roadmap for data-driven decision making and equipping them to convert data into a strategic differentiator. Through a host of our product and service offerings, we analyse and visualize large amounts of data. To know more about us, visit the Gramener Website and Gramener Blog.

Posted 1 week ago

Apply

18.0 - 22.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

This is a senior leadership position within the Business Information Management Practice, where you will be responsible for the overall vision, strategy, delivery, and operations of key accounts in BIM. You will work closely with the global executive team, subject matter experts, solution architects, project managers, and client teams to conceptualize, build, and operate Big Data Solutions. Your role will involve communicating with internal management, client sponsors, and senior leaders on project status, risks, solutions, and more. As a Client Delivery Leadership Role, you will be accountable for delivering at least $10 M + revenue using information management solutions such as Big Data, Data Warehouse, Data Lake, GEN AI, Master Data Management System, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/AZURE), and SaaS/PaaS based solutions. In addition, you will play a crucial Practice and Team Leadership Role, exhibiting qualities like self-driven initiative, customer focus, problem-solving skills, learning agility, ability to handle multiple projects, excellent communication, and leadership skills to coach and mentor staff. As a qualified candidate, you should hold an MBA in Business Management and a Bachelor of Computer Science. You should have 18+ years of prior experience, preferably including at least 5 years in the Pharma Commercial domain, delivering customer-focused information management solutions. Your skills should encompass successful end-to-end DW implementations using technologies like Big Data, Data Management, and BI technologies. Leadership qualities, team management experience, communication skills, and hands-on knowledge of databases, SQL, and reporting solutions are essential. Preferred skills include teamwork, leadership, motivation to learn and grow, ownership, cultural fit, talent management, and capability building/thought leadership. As part of Axtria, a global provider of cloud software and data analytics to the Life Sciences industry, you will contribute to transforming the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. Axtria values technology innovation and offers a transparent and collaborative culture with opportunities for training, career progression, and meaningful work in a fun environment. If you are a driven and experienced professional with a passion for leadership in information management technology and the Pharma domain, this role offers a unique opportunity to make a significant impact and grow within a dynamic and innovative organization.,

Posted 1 week ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Remote

Tech stack
Databases: MongoDB, S3, Postgres
Strong experience with data pipelines and mapping
React, Node, Python
AWS, Lambda

About the job
Summary: We are seeking a detail-oriented and proactive Data Analyst to lead our file and data operations, with a primary focus on managing data intake from our clients and ensuring data integrity throughout the pipeline. This role is vital to our operational success and will work cross-functionally to support data ingestion, transformation, validation, and secure delivery. The ideal candidate must have hands-on experience with healthcare datasets, especially medical claims data, and be proficient in managing ETL processes and data operations at scale.

Responsibilities
File Intake & Management: Serve as the primary point of contact for receiving files from clients, ensuring all incoming data is tracked, validated, and securely stored. Monitor and automate data file ingestion using tools such as AWS S3, AWS Glue, or equivalent technologies. Troubleshoot and resolve issues related to missing or malformed files and ensure timely communication with internal and external stakeholders.
Data Operations & ETL: Develop, manage, and optimize ETL pipelines for processing large volumes of structured and unstructured healthcare data. Perform data quality checks, validation routines, and anomaly detection across datasets. Ensure consistency and integrity of healthcare data (e.g., EHR, medical claims, ICD/CPT/LOINC codes) during transformations and downstream consumption.
Data Analysis & Reporting: Collaborate with data science and analytics teams to deliver operational insights and performance metrics. Build dashboards and visualizations using Power BI or Tableau to monitor data flow, error rates, and SLA compliance. Generate summary reports and audit trails to ensure HIPAA-compliant data handling practices.
Process Optimization: Identify opportunities for automation and efficiency in file handling and ETL processes. Document procedures, workflows, and data dictionaries to standardize operations.

Required Qualifications
Bachelor's or Master's degree in Health Informatics, Data Analytics, Computer Science, or a related field.
5+ years of experience in a data operations or analyst role with a strong focus on healthcare data.
Demonstrated expertise in working with medical claims data, EHR systems, and healthcare coding standards (e.g., ICD, CPT, LOINC, SNOMED, RxNorm).
Strong programming and scripting skills in Python and SQL for data manipulation and automation.
Hands-on experience with AWS, Redshift, RDS, S3, and data visualization tools such as Power BI or Tableau.
Familiarity with HIPAA compliance and best practices in handling protected health information (PHI).
Excellent problem-solving skills, attention to detail, and communication abilities.
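
For the file-intake and data-quality checks this listing describes, below is a minimal sketch of what an automated validation step might look like, assuming claims extracts arrive as CSV files in S3. The bucket, key, and required column names are illustrative assumptions, not details from the posting.

```python
import io

import boto3
import pandas as pd

# Assumed intake schema for a claims extract
REQUIRED_COLUMNS = {"claim_id", "member_id", "icd10_code", "cpt_code", "service_date", "paid_amount"}


def validate_claims_file(bucket: str, key: str) -> dict:
    """Download an incoming claims extract from S3 and run basic intake checks."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    missing_cols = REQUIRED_COLUMNS - set(df.columns)
    report = {
        "rows": len(df),
        "missing_columns": sorted(missing_cols),
        "duplicate_claim_ids": int(df["claim_id"].duplicated().sum()) if "claim_id" in df else None,
        "null_paid_amounts": int(df["paid_amount"].isna().sum()) if "paid_amount" in df else None,
    }
    report["passed"] = not missing_cols and report["duplicate_claim_ids"] == 0
    return report


# Example usage (bucket/key are placeholders):
# print(validate_claims_file("example-intake-bucket", "claims/2025-07/claims_weekly.csv"))
```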

Posted 1 week ago

Apply

7.0 years

0 Lacs

Mysuru, Karnataka, India

On-site

About ISOCRATES iSOCRATES advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies, and processes as the Global Leader in MADTech Resource Planning and Execution™ serving publishers, marketers, agencies and enablers. iSOCRATES has two lines of business: Products (MADTechAI™) and Services (Consulting: Strategy and Operations; Managed Services). MADTechAI is the Unified Marketing, Advertising and Data Decision Intelligence Platform. Purpose-built to Deliver Speed to Value serving B2C and B2B marketers, agencies, publishers, and their enablers. iSOCRATES is staffed 24/7/365 with its own proven specialists who save partners money, time, and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training. The company is headquartered in Saint Petersburg, Florida, U.S.A. with its global delivery centers in Mysuru and Bengaluru, Karnataka, India. Job Description MADTECH.AI is your Marketing Decision Intelligence platform. Unify, transform, analyze, and visualize all your data in a single, cost-effective AI-powered hub. Gain speed to value by leaving data wrangling, model building, data visualization, and proactive problem solving to MADTECH.AI. Sharper insights, smarter decisions, faster. MADTECH.AI was spun out of well-established Inc. 5000 consultancy iSOCRATES® which advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies and processes as the Global Leader in MADTECH Resource Planning and Execution™ serving marketers, agencies, publishers, and their data/tech suppliers. Responsibilities Minimum 7+ years of proven working experience with React, Redux, NodeJS and TypeScript. Strong understanding of frontend features and their practical implementation. Design, build, and integrate dashboards with SaaS products. Create responsive, user-friendly interfaces using frameworks such as Bootstrap. Involvement in performance optimization and security-related tasks in the front end and back end. Integrate APIs seamlessly into the application for dynamic functionality. Responsiveness of the application using Bootstrap, Material. Knowledge of CI/CD pipelines for efficient deployment. Implement robust unit tests using Jest to ensure code quality and maintainability. Version Control System: GitHub, Git Strong proficiency in MongoDB and PostgreSQL. Troubleshoot, debug, and upgrade existing systems to enhance functionality and performance. Good To Have Experience with programming languages like Python or Kotlin. Knowledge of AWS services such as RDS, Redshift, and tools like Airbyte and Airflow. Hands-on experience with BI tools like Superset or AWS QuickSight. Exposure to digital marketing data integrations and analytics. Prior experience with microservices architecture and containerization tools like Docker and Kubernetes. Minimum Education Required Bachelor’s degree in Computer Science, or related quantitative field required (master’s degree in business administration preferred).

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

IDT (www.idt.net) is a communications and financial services company founded in 1990 and headquartered in New Jersey, US. Today it is an industry leader in prepaid communication and payment services and one of the world’s largest international voice carriers. We are listed on the NYSE, employ over 1500 people across 20+ countries, and have revenues in excess of $1.5 billion.

We are looking for a Mid-level Business Intelligence Engineer to join our global team. If you are highly intelligent, motivated, ambitious, ready to learn and make a direct impact, this is your opportunity! The individual in this role will perform data analysis, ELT/ETL design, and support functions to deliver on strategic initiatives to meet organizational goals across many lines of business. The interview process will be conducted in English.

Responsibilities:
Develop, document, and test ELT/ETL solutions using industry standard tools (Snowflake, Denodo Data Virtualization, Looker)
Recommend process improvements to increase efficiency and reliability in ELT/ETL development
Extract data from multiple sources, integrate disparate data into a common data model, and integrate data into a target database, application, or file using efficient ELT/ETL processes
Collaborate with Quality Assurance resources to debug ELT/ETL development and ensure the timely delivery of products
Be willing to explore and learn new technologies and concepts to provide the right kind of solution
Be target- and results-oriented with a strong end-user focus
Communicate effectively, both orally and in writing, with the BI team and the user community

Requirements:
5+ years of experience in ETL/ELT design and development, integrating data from heterogeneous OLTP systems and API solutions, and building scalable data warehouse solutions to support business intelligence and analytics
Demonstrated experience in utilizing Python for data engineering tasks, including transformation, advanced data manipulation, and large-scale data processing
Experience in data analysis and root cause analysis, with proven problem-solving and analytical thinking capabilities
Experience designing complex data pipelines extracting data from RDBMS, JSON, API, and flat file sources
Demonstrated expertise in SQL and PL/SQL programming, with advanced mastery of business intelligence and data warehouse methodologies, along with hands-on experience in one or more relational database systems and cloud-based database services such as Oracle, MySQL, Amazon RDS, Snowflake, Amazon Redshift, etc.
Proven ability to analyze and optimize poorly performing queries and ETL/ELT mappings, providing actionable recommendations for performance tuning
Understanding of software engineering principles, skills working on Unix/Linux/Windows operating systems, and experience with Agile methodologies
Proficiency in version control systems, with experience in managing code repositories, branching, merging, and collaborating within a distributed development environment
Excellent English communication skills
Interest in business operations and a comprehensive understanding of how robust BI systems drive corporate profitability by enabling data-driven decision-making and strategic insights

Pluses:
Experience in developing ETL/ELT processes within Snowflake and implementing complex data transformations using built-in functions and SQL capabilities
Experience using Pentaho Data Integration (Kettle) / Ab Initio ETL tools for designing, developing, and optimizing data integration workflows
Experience designing and implementing cloud-based ETL solutions using Azure Data Factory, DBT, AWS Glue, Lambda, and open-source tools
Experience with reporting/visualization tools (e.g., Looker) and job scheduler software
Experience in Telecom, eCommerce, and International Mobile Top-up

Education: BS/MS in Computer Science, Information Systems, or a related technical field, or equivalent industry expertise
Preferred Certifications: AWS Solution Architect, AWS Cloud Data Engineer, Snowflake SnowPro Core

Please attach CV in English. The interview process will be conducted in English. Only accepting applicants from INDIA.
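
As an illustration of the ELT pattern this role describes (land raw data, then transform inside the warehouse), here is a minimal Python sketch using the Snowflake connector. The account, credentials, stage, and table names are placeholders; in practice credentials would come from a secrets manager rather than being hard-coded.

```python
import snowflake.connector

# Connection parameters are placeholders for illustration only
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Extract/Load: land raw CSV files from an external stage into a staging table
    cur.execute("""
        COPY INTO staging.raw_orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Transform: cast and filter inside the warehouse, then publish to a fact table
    cur.execute("""
        INSERT INTO analytics.fct_orders
        SELECT order_id, customer_id, TRY_TO_DECIMAL(amount, 12, 2), TO_DATE(order_date)
        FROM staging.raw_orders
        WHERE order_id IS NOT NULL
    """)
finally:
    conn.close()
```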

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future.

We are looking to hire AWS Professionals in the following areas:

AWS Data Engineer (JD as below)
Primary skillset: AWS services including Glue, PySpark, SQL, Databricks, Python
Secondary skillset: any ETL tool, GitHub, DevOps (CI/CD)
Experience: 3-4 years
Degree in computer science, engineering, or similar fields

Mandatory Skill Set: Python, PySpark, SQL, and AWS, with experience designing, developing, testing, and supporting data pipelines and applications.
3+ years of working experience in data integration and pipeline development.
3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus.
3+ years of experience using SQL in related development of data warehouse projects/applications (Oracle & SQL Server).
Strong real-life experience in Python development, especially PySpark in an AWS Cloud environment.
Strong SQL and NoSQL database skills (MySQL, Postgres, DynamoDB, Elasticsearch).
Workflow management tools like Airflow.
AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice).
Good to have: Snowflake, Palantir Foundry.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
Strong knowledge of AWS services, including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless. Experience in setting up and managing data pipelines using AWS DMS. Proficiency in creating and managing data storage solutions using AWS S3. Proficiency in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle. Experience in setting up and managing data warehouses, particularly AWS Redshift Serverless.

Responsibilities
Analytical and Problem-Solving Skills: Ability to analyze and interpret complex data sets. Experience in identifying and resolving data integration issues such as inconsistencies or discrepancies. Strong problem-solving skills to troubleshoot and resolve data integration and migration issues.
Soft Skills: Ability to work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements. Strong communication skills to document data integration processes, including data source definitions, data flow diagrams, and system interactions. Ability to participate in design reviews and provide input on data integration plans. Willingness to stay updated with the latest data integration tools and technologies and recommend upgrades when necessary.
Security and Compliance: Knowledge of data security and privacy regulations. Experience in ensuring adherence to data security and privacy standards during data integration processes.
Certifications Required: AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus.
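
To make the AWS DMS piece above concrete, here is a small, illustrative boto3 sketch that starts an existing replication task and checks its status. The task ARN and region are placeholders, and the source/target endpoints and table mappings are assumed to have been created separately (console, CLI, or infrastructure as code).

```python
import boto3

dms = boto3.client("dms", region_name="ap-south-1")

# Placeholder ARN of a replication task created elsewhere
TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLETASK"

# Kick off a full-load plus CDC replication run
dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="start-replication",
)

# Check progress of the task
status = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]
print(status["Status"], status.get("ReplicationTaskStats", {}))
```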

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth & innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams. As a Senior Data Engineer, you will: collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations; take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes; cultivate collaboration with corporate engineering, product teams, and other engineering groups; lead and mentor engineering discussions, advocating for best practices; actively participate in design and code reviews; access and explore third-party data APIs to determine the data required to meet business needs; ensure data quality and integrity across different sources and systems; manage data pipelines for both analytics and operational purposes; and continuously enhance processes and policies to improve SLA and SOX compliance. You'll be a great addition to the team if you: hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field; possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments; have at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment; have a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes; possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB); are proficient in programming with Python or other scripting languages; are familiar with columnar OLAP databases and data modeling; have experience building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau; and possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements. Added bonus if you also have: a good understanding of Salesforce & NetSuite systems, experience in SaaS environments, designed and deployed ML models, and experience with events and streaming data. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
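
For illustration, a minimal Airflow DAG sketch of the dbt-based ELT orchestration mentioned above; the DAG id, schedule, project path, and target are hypothetical:

    # Illustrative Airflow 2.4+ DAG: run dbt transformations, then dbt tests.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="elt_daily_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="0 2 * * *",   # run daily at 02:00
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics --target prod",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics --target prod",
        )
        dbt_run >> dbt_test   # only test models after they have been built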

Posted 1 week ago

Apply

10.0 - 14.0 years

8 - 15 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth & innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams. As a Senior Data Engineer, you will: collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations; take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes; cultivate collaboration with corporate engineering, product teams, and other engineering groups; lead and mentor engineering discussions, advocating for best practices; actively participate in design and code reviews; access and explore third-party data APIs to determine the data required to meet business needs; ensure data quality and integrity across different sources and systems; manage data pipelines for both analytics and operational purposes; and continuously enhance processes and policies to improve SLA and SOX compliance. You'll be a great addition to the team if you: hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field; possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments; have at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment; have a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes; possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB); are proficient in programming with Python or other scripting languages; are familiar with columnar OLAP databases and data modeling; have experience building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau; and possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements. Job Title: Senior Software Engineer Full Stack. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai. Timings: 11 AM to 8 PM IST

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: AWS PySpark Data Engineer. Experience: 6-9 years as a Data Engineer with a strong focus on PySpark and large-scale data processing. PySpark expertise: Competent to proficient in writing optimized PySpark code, including working with DataFrames, Spark SQL, and performing complex transformations. AWS cloud proficiency: Fair experience with core AWS services such as S3, Glue, EMR, Lambda, and Redshift, with the ability to manage and optimize data workflows on AWS. Performance optimization: Proven ability to optimize PySpark jobs for performance, including experience with partitioning, caching, and handling skewed data. Problem-solving skills: Strong analytical and problem-solving skills, with a focus on troubleshooting data issues and optimizing performance in distributed environments. Communication and collaboration: Excellent communication skills to work effectively with cross-functional teams and clearly document technical processes. Added advantage: AWS Glue ETL: Hands-on experience with AWS Glue ETL jobs, including creating and managing workflows, handling job bookmarks, and implementing transformations. Database: Good working knowledge of data warehouses like Redshift.
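
A compact sketch of the optimization techniques named above (repartitioning, caching, and salting a skewed join key); the S3 paths and column names are hypothetical:

    # Illustrative PySpark skew-handling pattern: repartition, cache, and salt a hot key.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("skew_demo").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/")        # large, skewed on customer_id
    customers = spark.read.parquet("s3://example-bucket/customers/")  # smaller dimension

    # Cache a dataset that is reused across several downstream aggregations
    events = events.repartition(200, "customer_id").cache()

    # Salting: spread hot keys across N buckets before joining, then aggregate back
    N = 16
    events_salted = events.withColumn("salt", (F.rand() * N).cast("int"))
    customers_salted = customers.withColumn(
        "salt", F.explode(F.array([F.lit(i) for i in range(N)]))  # replicate dimension rows
    )

    joined = events_salted.join(customers_salted, ["customer_id", "salt"])
    result = joined.groupBy("customer_id").agg(F.sum("amount").alias("total_amount"))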

Posted 1 week ago

Apply

9.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Job Title: Manager - Data Analytics (Pan India)
Job Role & Responsibilities
9-12 years of experience in Data Analytics and reporting, Databricks, Power BI, and Snowflake. The Data Analytics Manager will lead the delivery of advanced analytics solutions across our global team, ensuring the team delivers high-quality, scalable data solutions using technologies such as Snowflake, Power BI, Microsoft Fabric, Azure, AWS, Python, SQL, and R. Lead and manage a team of 8-15 data analysts, engineers, and scientists, providing day-to-day direction, performance management, and career development. The successful candidate will foster a high-performance culture through coaching, mentoring, and continuous development, while ensuring alignment with business goals and data governance standards. Act as a hands-on technical leader, guiding the team in the design and implementation of data solutions using Snowflake, Azure Synapse, AWS Redshift, and Microsoft Fabric. Oversee the development of dashboards and analytics products using Power BI, ensuring they meet business requirements and usability standards. Drive the adoption of advanced analytics and machine learning models using Python, SQL, and R to support forecasting, segmentation, and operational insights. Establish and maintain team standards for code quality, documentation, testing, and version control. Design and deliver structured training plans to upskill team members in cloud platforms, analytics tools, and certifications (e.g., PL-300, SnowPro Core). Conduct regular 1:1s, performance reviews, and development planning to support individual growth and team capability. Collaborate with business stakeholders to translate requirements into actionable data solutions and ensure timely delivery. Promote a culture of innovation, continuous improvement, and knowledge sharing within the team. Support the Head of Data Analytics in strategic planning, resource forecasting, and delivery governance.
Skills Required
Role: Manager - Data Analytics (Pan India)
Industry Type: ITES/BPO/KPO
Functional Area:
Required Education: B Com
Employment Type: Full Time, Permanent
Key Skills: Data Analytics, Data Reporting, Power BI
Other Information
Job Code: GO/JC/685/2025
Recruiter Name: Anupriya Yugesh
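
For illustration, a small Python sketch of the kind of Snowflake-backed analysis such a team produces: pull an aggregate into pandas and apply a simple rule-based segmentation. The account, credentials, and table names are hypothetical:

    # Illustrative Snowflake-to-pandas segmentation sketch (credentials and names are hypothetical).
    import pandas as pd
    import snowflake.connector  # requires snowflake-connector-python[pandas]

    conn = snowflake.connector.connect(
        account="example_account",
        user="analytics_user",
        password="***",
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="MARTS",
    )

    cur = conn.cursor()
    cur.execute("""
        SELECT customer_id, SUM(order_amount) AS revenue, COUNT(*) AS orders
        FROM fct_orders
        GROUP BY customer_id
    """)
    df = cur.fetch_pandas_all()  # Snowflake returns unquoted column names in upper case

    # Simple rule-based segmentation on revenue quartiles
    df["segment"] = pd.qcut(df["REVENUE"], q=4, labels=["low", "mid", "high", "top"])
    print(df["segment"].value_counts())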

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come. Role Overview: The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems. Responsibilities: Lead the design and implementation of AWS-based data architectures and pipelines. Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and ensure alignment with business goals. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in AWS data environments. Stay updated on the latest AWS technologies and industry trends. Key Technical Skills & Responsibilities: Overall 10+ years of experience in IT, with a minimum of 5-7 years in the design and development of cloud data platforms using AWS services. Must have experience in the design and development of data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift. Must be aware of AWS access control and data security features such as VPC, IAM, Security Groups, KMS, etc. Must be good with Python and PySpark for data pipeline building. Must have data modeling experience, including S3 data organization. Must have an understanding of Hadoop components, NoSQL databases, graph databases, and time-series databases, and the AWS services available for those technologies. Must have experience working with structured, semi-structured, and unstructured data. Must have experience with streaming data collection and processing; Kafka experience is preferred. Experience migrating data warehouse / big data applications to AWS is preferred. Must be able to use Gen AI services (like Amazon Q) for productivity gains. Eligibility Criteria: Bachelor’s degree in Computer Science, Data Engineering, or a related field. Extensive experience with AWS data services and tools. AWS certification (e.g., AWS Certified Data Analytics - Specialty). Experience with machine learning and AI integration in AWS environments. Strong understanding of data modeling, ETL/ELT processes, and cloud integration. Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills. Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, including integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture. Let’s grow together.
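
As a sketch of querying a Glue-catalogued S3 data lake with Athena from Python, assuming a hypothetical database, table, and results bucket:

    # Illustrative Athena query over an S3 data lake, polled to completion via boto3.
    import time
    import boto3

    athena = boto3.client("athena")

    qid = athena.start_query_execution(
        QueryString="SELECT event_date, COUNT(*) AS events FROM clickstream GROUP BY event_date",
        QueryExecutionContext={"Database": "lake_curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(f"Fetched {len(rows) - 1} data rows")  # first row is the header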

Posted 1 week ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: BI Engineer – Amazon QuickSight Developer Job Summary We are seeking an experienced Amazon QuickSight Developer to join our BI team. This role requires deep expertise in designing and deploying intuitive, high-impact dashboards and managing all aspects of QuickSight administration. You’ll collaborate closely with data engineers and business stakeholders to create scalable BI solutions that empower data-driven decisions across the organization. Key Responsibilities Dashboard Development & Visualization Design, develop, and maintain interactive QuickSight dashboards using advanced visuals, parameters, and controls. Create reusable datasets and calculated fields using both SPICE and Direct Query modes. Implement advanced analytics such as level-aware calculations, ranking, period-over-period comparisons, and custom KPIs. Build dynamic, user-driven dashboards with multi-select filters, dropdowns, and custom date ranges. Optimize performance and usability to maximize business value and user engagement. QuickSight Administration Manage users, groups, and permissions through QuickSight and AWS IAM roles. Implement and maintain row-level security (RLS) to ensure appropriate data access. Monitor usage, SPICE capacity, and subscription resources to maintain system performance. Configure and maintain themes, namespaces, and user interfaces for consistent experiences. Work with IT/cloud teams on account-level settings and AWS integrations. Collaboration & Data Integration Partner with data engineers and analysts to understand data structures and business needs. Integrate QuickSight with AWS services such as Redshift, Athena, S3, and Glue. Ensure data quality and accuracy through robust data modeling and SQL optimization. Required Skills & Qualifications 3+ years of hands-on experience with Amazon QuickSight (development and administration). Strong SQL skills and experience working with large, complex datasets. Expert-level understanding of QuickSight security, RLS, SPICE management, and user/group administration. Strong sense of data visualization best practices and UX design principles. Proficiency with AWS data services including Redshift, Athena, S3, Glue, and IAM. Solid understanding of data modeling and business reporting frameworks. Nice To Have Experience with Python, AWS Lambda, or automating QuickSight administration via SDK or CLI. Familiarity with modern data stack tools (e.g., dbt, Snowflake, Tableau, Power BI). Apply Now If you’re passionate about building scalable BI solutions and making data come alive through visualization, we’d love to hear from you!
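
For illustration, a minimal boto3 sketch of the QuickSight administration automation the posting mentions: trigger a SPICE refresh and list users for a permissions audit. The account ID and dataset ID are hypothetical:

    # Illustrative QuickSight admin automation via the AWS SDK (boto3).
    import uuid
    import boto3

    qs = boto3.client("quicksight")
    ACCOUNT_ID = "123456789012"  # hypothetical AWS account ID

    # Kick off a SPICE ingestion (refresh) for a dataset
    qs.create_ingestion(
        AwsAccountId=ACCOUNT_ID,
        DataSetId="example-dataset-id",
        IngestionId=f"refresh-{uuid.uuid4()}",
    )

    # Review users in the default namespace for permission/RLS audits
    users = qs.list_users(AwsAccountId=ACCOUNT_ID, Namespace="default")["UserList"]
    for u in users:
        print(u["UserName"], u["Role"])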

Posted 1 week ago

Apply

0.0 years

0 Lacs

Mysuru, Karnataka

On-site

About iSOCRATES iSOCRATES advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies, and processes as the Global Leader in MADTech Resource Planning and Execution™ serving publishers, marketers, agencies and enablers. iSOCRATES has two lines of business: Products (MADTechAI™) and Services (Consulting: Strategy and Operations; Managed Services). MADTechAI is the Unified Marketing, Advertising and Data Decision Intelligence Platform, purpose-built to deliver speed to value for B2C and B2B marketers, agencies, publishers, and their enablers. iSOCRATES is staffed 24/7/365 with its own proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training. The company is headquartered in Saint Petersburg, Florida, U.S.A. with its global delivery centers in Mysuru and Bengaluru, Karnataka, India. Job Description: MADTECH.AI is your Marketing Decision Intelligence platform. Unify, transform, analyze, and visualize all your data in a single, cost-effective AI-powered hub. Gain speed to value by leaving data wrangling, model building, data visualization, and proactive problem solving to MADTECH.AI. Sharper insights, smarter decisions, faster. MADTECH.AI was spun out of well-established Inc. 5000 consultancy iSOCRATES®, which advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies and processes as the Global Leader in MADTECH Resource Planning and Execution™ serving marketers, agencies, publishers, and their data/tech suppliers. Responsibilities: Minimum 7+ years of proven working experience with React, Redux, NodeJS and TypeScript. Strong understanding of frontend features and their practical implementation. Design, build, and integrate dashboards with SaaS products. Create responsive, user-friendly interfaces using frameworks such as Bootstrap. Involvement in performance optimization and security-related tasks in the front end and back end. Integrate APIs seamlessly into the application for dynamic functionality. Ensure responsiveness of the application using Bootstrap and Material. Knowledge of CI/CD pipelines for efficient deployment. Implement robust unit tests using Jest to ensure code quality and maintainability. Version control systems: GitHub, Git. Strong proficiency in MongoDB and PostgreSQL. Troubleshoot, debug, and upgrade existing systems to enhance functionality and performance. Good to have: Experience with programming languages like Python or Kotlin. Knowledge of AWS services such as RDS, Redshift, and tools like Airbyte and Airflow. Hands-on experience with BI tools like Superset or AWS QuickSight. Exposure to digital marketing data integrations and analytics. Prior experience with microservices architecture and containerization tools like Docker and Kubernetes. Minimum Education Required: Bachelor’s degree in Computer Science or a related quantitative field (master’s degree in business administration preferred).

Posted 1 week ago

Apply

4.0 years

18 - 20 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients. Salary range: Rs 1800000 - Rs 2000000 (i.e., INR 18-20 LPA). Min Experience: 4 years. Location: Bangalore. Job Type: full-time. Requirements: We are seeking an experienced and detail-oriented Data Analyst with a strong background in SQL, PySpark, Python, and Power BI (PBI) to join our data and analytics team. As a Data Analyst, you will play a critical role in transforming raw data into actionable insights that drive strategic business decisions. You'll work closely with cross-functional teams including business, product, engineering, and marketing to understand data requirements, build robust data models, and deliver meaningful reports and dashboards. The ideal candidate has 4+ years of hands-on experience working in fast-paced, data-driven environments, with a strong command of data querying, scripting, and visualization. This is an excellent opportunity for someone who enjoys solving complex data problems and communicating insights to both technical and non-technical stakeholders. Key Responsibilities: Data Extraction & Transformation: Use SQL and PySpark to extract, clean, transform, and aggregate large datasets from structured and unstructured sources. Data Analysis: Conduct exploratory and ad-hoc data analysis using Python and other statistical tools to identify trends, anomalies, and business opportunities. Dashboarding & Reporting: Design, develop, and maintain interactive dashboards and reports using Power BI to visualize KPIs, business metrics, and forecasts. Data Modeling: Build and maintain efficient and scalable data models to support reporting and analytics use cases. Business Collaboration: Partner with internal teams to gather requirements, understand business challenges, and deliver data-driven solutions. Performance Tracking: Monitor campaign and business performance, identify areas of improvement, and suggest data-backed strategies. Automation: Streamline and automate recurring reporting processes using Python scripting and PBI integrations. Data Governance: Ensure data accuracy, consistency, and compliance with privacy regulations and data governance frameworks. Documentation: Maintain comprehensive documentation of data workflows, pipelines, and dashboards for knowledge transfer and reproducibility. Required Skills And Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Statistics, or a related field. 4+ years of professional experience as a Data Analyst or in a similar role involving large-scale data analysis. Strong expertise in SQL for data querying, joins, aggregations, and optimization techniques. Hands-on experience with PySpark for big data processing and distributed computing. Proficiency in Python for data manipulation, statistical analysis, and building automation scripts. Advanced working knowledge of Power BI for building reports, dashboards, and performing DAX calculations. Strong analytical thinking, with the ability to work independently and manage multiple projects simultaneously. Excellent communication and stakeholder management skills; ability to translate complex data into simple business insights. Familiarity with cloud platforms (Azure/AWS/GCP), data warehouses (Snowflake, Redshift, BigQuery), and version control tools (Git) is a plus.
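
A brief sketch of the SQL-plus-PySpark workflow described above: aggregate KPIs with Spark SQL, then hand the small result to pandas for a Power BI-ready extract. Paths and column names are hypothetical:

    # Illustrative KPI aggregation with Spark SQL, exported for downstream reporting.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kpi_report").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/marts/orders/")
    orders.createOrReplaceTempView("orders")

    kpis = spark.sql("""
        SELECT date_trunc('month', order_date) AS month,
               SUM(order_amount)               AS revenue,
               COUNT(DISTINCT customer_id)     AS active_customers
        FROM orders
        GROUP BY 1
        ORDER BY 1
    """)

    # Small, aggregated output only, so it is safe to collect locally for a BI extract
    kpis_pd = kpis.toPandas()
    kpis_pd.to_csv("monthly_kpis.csv", index=False)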

Posted 1 week ago

Apply