
3597 Redshift Jobs - Page 2

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

India

On-site

Essential Functions:
- Analyze data to discover and interpret trends, patterns, and relationships
- Maintain the integrity of automated reports
- Design, implement, and analyze controlled experiments to assess and optimize new opportunities across business channels
- Analyze and identify best-performing customer segments to help strategize precise targeting methods
- Develop and maintain reporting dashboards to support decision-making among data analysts and the Technical Solutions team
- Monitor and evaluate trends for performance and opportunities
- Work cross-functionally to establish instrumentation and reporting standards
- Create dashboards and reports to communicate actionable data
- Review data quality and provide guidance and controls to upstream data providers and sources
- Perform ad hoc data requests
- Stay current on industry tools, techniques, and competitor marketing strategies
- Manage delivery of scheduled and ad hoc reports, noting business trends in new customer counts, order/sales impact of marketing triggers, and promoted product performance
- Analyze results and develop performance improvement opportunities

Education and Experience:
- Bachelor's degree in Mathematics, Statistics, or another quantitative field
- Experience with Tableau; Tableau Qualified Associate Certification preferred
- Experience with SQL, Postgres, and Redshift a plus
- High proficiency in Excel modeling, data mining, and scenario analysis
- Highly analytical and quantitative, with strong attention to detail
- Self-starter with excellent written and verbal communication as well as interpersonal skills
- Ability to thrive in a fast-paced, ambiguous, interruptive environment
- Ability to work independently as well as collaboratively toward a common goal
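Where the posting above asks for designing and analyzing controlled experiments across customer segments, a minimal Python sketch of that kind of readout might look like the following; the DataFrame, column names, and choice of test are illustrative assumptions rather than anything specified by the employer.

```python
# Minimal sketch of a controlled-experiment readout on customer segments.
# Assumes a pandas DataFrame with one row per customer and illustrative
# columns: 'segment' ('control'/'treatment') and 'converted' (0/1).
import pandas as pd
from scipy import stats

def experiment_readout(df: pd.DataFrame) -> None:
    control = df.loc[df["segment"] == "control", "converted"]
    treatment = df.loc[df["segment"] == "treatment", "converted"]

    # Conversion rate per arm
    print(f"control rate:   {control.mean():.4f}  (n={len(control)})")
    print(f"treatment rate: {treatment.mean():.4f}  (n={len(treatment)})")

    # Welch two-sample t-test on the conversion indicators
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

if __name__ == "__main__":
    demo = pd.DataFrame({
        "segment": ["control"] * 5 + ["treatment"] * 5,
        "converted": [0, 1, 0, 0, 1, 1, 1, 0, 1, 1],
    })
    experiment_readout(demo)
```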

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us
SentiLink provides innovative identity and risk solutions, empowering institutions and individuals to transact confidently with one another. By building the future of identity verification in the United States and reinventing the currently clunky, ineffective, and expensive process, we believe strongly that the future will be 10x better. We've had tremendous traction and are growing extremely quickly. Already our real-time APIs have helped verify hundreds of millions of identities, beginning with financial services. In 2021, we raised a $70M Series B round, led by Craft Ventures, to rapidly scale our best-in-class products. We've earned coverage and awards from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, American Banker, LendIt, and have been named to the Forbes Fintech 50 list consecutively since 2023. Last but not least, we've even been a part of history: we were the first company to go live with the eCBSV and testified before the United States House of Representatives.

About The Role
Are you passionate about creating world-class solutions that fuel product stability and continuously improve infrastructure operations? We're looking for a driven Infrastructure Engineer to architect, implement, and maintain powerful observability systems that safeguard the performance and reliability of our most critical systems. In this role, you'll take real ownership: collaborating with cross-functional teams to shape best-in-class observability standards, troubleshoot complex issues, and fine-tune monitoring tools to exceed SLA requirements. If you're ready to design high-quality solutions, influence our technology roadmap, and make a lasting impact on our product's success, we want to meet you!

Responsibilities
- Improve alerting across SentiLink systems and services, developing high-quality monitoring capabilities while actively reducing false positives.
- Troubleshoot, debug, and resolve infrastructure issues as they arise; participate in on-call rotations for production issues.
- Define and refine Service Level Indicators (SLIs), Service Level Objectives (SLOs), and Service Level Agreements (SLAs) in collaboration with product and engineering teams.
- Develop monitoring and alerting configurations using IaC solutions such as Terraform.
- Build and maintain dashboards to provide visibility into system performance and reliability.
- Collaborate with engineering teams to improve root cause analysis processes and reduce Mean Time to Recovery (MTTR).
- Drive cost optimization for observability tools like Datadog, CloudWatch, and Sumo Logic.
- Perform capacity testing to develop a deep understanding of infrastructure performance under load, and develop alerting based on learnings.
- Oversee, develop, and operate Kubernetes and service mesh infrastructure, ensuring smooth performance and reliability.
- Investigate operational alerts, identify root causes, and compile comprehensive root cause analysis reports. Pursue action items relentlessly until they are thoroughly completed.
- Conduct in-depth examinations of database operational issues, actively developing and improving database architecture, schema, and configuration for enhanced performance and reliability.
- Develop and maintain incident response runbooks and improve processes to minimize service downtime.
- Research and evaluate new observability tools and technologies to enhance system monitoring.

Requirements
- 5+ years of experience in cloud infrastructure, DevOps, or systems engineering.
- Expertise in AWS and infrastructure-as-code development.
- Experience with CI/CD pipelines and automation tools.
- Experience managing observability platforms, building monitoring dashboards, and configuring high-quality, actionable alerting.
- Strong understanding of Linux systems and networking.
- Familiarity with container orchestration tools like Kubernetes or Docker.
- Excellent analytical and problem-solving skills.
- Experience operating enterprise-size databases; Postgres, Aurora, Redshift, and OpenSearch experience is a plus.
- Experience with Python or Golang is a plus.

Perks
- Employer-paid group health insurance for you and your dependents
- 401(k) plan with employer match (or equivalent for non-US-based roles)
- Flexible paid time off
- Regular company-wide in-person events
- Home office stipend, and more!

Corporate Values: Follow Through. Deep Understanding. Whatever It Takes. Do Something Smart.
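The SLI/SLO/SLA work this posting describes ultimately reduces to error-budget bookkeeping; the sketch below shows that arithmetic on hypothetical request counts (the SLO target and numbers are assumptions, and real figures would come from a monitoring backend such as Datadog or CloudWatch).

```python
# Minimal sketch of SLI/SLO bookkeeping. The request counts and SLO target
# are illustrative; in practice they would be pulled from monitoring tooling.

def error_budget_report(total_requests: int, failed_requests: int,
                        slo_target: float = 0.999) -> None:
    sli = 1 - failed_requests / total_requests            # observed availability
    allowed_failures = (1 - slo_target) * total_requests   # error budget, in requests
    budget_consumed = (failed_requests / allowed_failures
                       if allowed_failures else float("inf"))

    print(f"SLI (availability):    {sli:.5f}")
    print(f"SLO target:            {slo_target:.5f}")
    print(f"Error budget consumed: {budget_consumed:.1%}")
    if budget_consumed > 1.0:
        print("ALERT: error budget exhausted - page the on-call rotation")

if __name__ == "__main__":
    error_budget_report(total_requests=1_000_000, failed_requests=1_200)
```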

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows.

Who You Are
- Working knowledge of one OO language, preferably C#, but we won't hold your Java expertise against you (you're the type of person who's interested in learning and becoming an expert at new things). Additionally, we've been using Python more and more, and bonus points if you're familiar with Scala.
- Experience with architecturally complex distributed systems.
- Highly focused on operational excellence and quality – you have a passion for writing clean, well-tested code and believe in the testing pyramid.
- Outstanding verbal and written communication skills, with the ability to work with others at all levels; effective at working with geographically remote and culturally diverse teams.
- You enjoy solving challenging problems, all while having a blast with equally passionate team members.
- Conversant in AI engineering. You've been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc.

At Seismic, we're committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page.

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.

- Collaborate with experienced software engineers, data scientists, and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience.
- Build large-scale platform infrastructure and REST APIs serving machine-learning-driven content recommendations to Seismic products.
- Leverage the power of context in third-party applications such as CRMs to drive machine learning algorithms and models.
- Help build next-gen agentic tooling for reporting and insights.
- Process large amounts of internal and external system data for analytics, caching, modeling, and more.
- Identify performance bottlenecks and implement solutions for them.
- Participate in code reviews, system design reviews, agile ceremonies, bug triage, and on-call rotations.

- BS or MS in Computer Science, a similar technical field of study, or equivalent practical experience.
- 3+ years of software development experience within a SaaS business.
- Familiarity with .NET Core, C#, and related frameworks.
- Experience in data engineering: building and managing data pipelines, ETL processes, and familiarity with various technologies that drive them, such as Kafka, Fivetran (optional), Spark/Scala (optional), etc.
- Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.).
- Familiarity with RESTful microservice-based APIs.
- Experience in modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) a big plus.
- Experience with the Scrum and Agile development processes.
- Familiarity developing in cloud-based environments.
- Optional: experience with third-party integrations.
- Optional: familiarity with meeting systems like Zoom, WebEx, MS Teams.
- Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot.

If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
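Since the posting above centers on Kafka-based ingestion feeding reporting models, here is a minimal, illustrative sketch of that hop in Python; the topic name, brokers, event fields, and the kafka-python client are all assumptions, not a description of Seismic's actual stack.

```python
# Minimal sketch of one ingestion hop: consume engagement events from Kafka
# and fold them into a simple reporting aggregate. All names are placeholders.
import json
from collections import defaultdict

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "content-engagement-events",               # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="reporting-model-builder",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Stand-in for the richer reporting models customers query downstream:
# a running count of views per piece of content.
views_by_content = defaultdict(int)

for message in consumer:
    event = message.value
    if event.get("type") == "view":
        views_by_content[event["content_id"]] += 1
```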

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, building data warehouses, and knowledge of ingestion tools and data governance are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.

Why Choose Ideas2IT
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this (see here and here). We are following suit.

What's in it for you?
- You will get to work on impactful products instead of back-office applications, for customers like Facebook, Siemens, Roche, and more
- You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment
- Opportunity to continuously learn newer technologies
- Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel
- Showcase your talent in Shark Tanks and Hackathons conducted in the company

Here's what you'll bring
- Experience in designing and building data platforms in any cloud.
- Strong expertise in either AWS Data Engineering or Azure Data Engineering.
- Develop and optimize data processing pipelines using distributed systems like Spark.
- Create and maintain data models to support efficient storage and retrieval.
- Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc.
- Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory.
- Establish and enforce data governance policies and procedures to ensure data quality and security.
- Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows.
- Develop scripts and applications in Python to automate tasks and processes.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.
- Communicate technical solutions effectively to clients and stakeholders.
- Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP).
- Experience with containerization and orchestration technologies like Docker and Kubernetes.
- Knowledge of machine learning and data science concepts.
- Experience with data visualization tools such as Tableau or Power BI.
- Understanding of DevOps principles and practices.

About Us: Ideas2IT stands at the intersection of Technology, Business, and Product Engineering, offering high-caliber Product Development services. Initially conceived as a CTO consulting firm, we've evolved into thought leaders in cutting-edge technologies such as Generative AI, assisting our clients in embracing innovation. Our forte lies in applying technology to address business needs, demonstrated by our track record of developing AI-driven solutions for industry giants like Facebook, Bloomberg, Siemens, Roche, and others. Harnessing our product-centric approach, we've incubated several AI-based startups, including Pipecandy, Element5, IdeaRx, and Carefi.in, that have flourished into successful ventures backed by venture capital. With fourteen years of remarkable growth behind us, we're steadfast in pursuing ambitious objectives.
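The orchestration requirement mentioned in the posting (Airflow or Dagster) is easiest to picture as a short DAG; the sketch below is a generic daily extract-transform-load skeleton in Airflow, with the DAG name, schedule, and task callables as placeholder assumptions.

```python
# Minimal Airflow sketch of a daily extract -> transform -> load workflow.
# Task names, schedule, and the callables themselves are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    ...  # e.g. pull increments from source systems
def transform():  ...  # e.g. run Spark / pandas transformations
def load():       ...  # e.g. load curated tables into the warehouse

with DAG(
    dag_id="daily_warehouse_refresh",     # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```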

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
This role will be part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple distributed project teams in multiple regions. As a member of the technical staff with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities
- System Deployment: Conceive, design, and build new features in the existing backend processing pipelines.
- CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
- Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
- Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
- Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
- Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres.
- Minimum 2 years of development on an AWS platform.
- Strong understanding of CI/CD principles and tools; GitLab a plus.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
- Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
- Utilizes team collaboration to create innovative solutions efficiently.

Other Desirable Skills
- Knowledge of networking principles and security best practices.
- AWS certifications.
- Experience with Data Warehouses, ETL, and/or Data Lakes very desirable.
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
- Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About The Team We are currently seeking a Senior Business Intelligence Analyst to join the CX Analytics organization at Workday. This team is responsible for providing data insights that inform and influence our CX strategy and business decisions. We are looking for a problem solver that loves to analyze data and provide insights and recommendations for our internal customers. The ideal candidate is passionate about using data to solve exciting problems, shape business strategy, create actionable insights and measure results. You are intellectually curious, results driven, and have proven success in using analytics to drive the understanding, development, and success of Customer Service initiatives. About The Role You will: Lead interviews with key business leaders and stakeholders to deeply understand what business problems we are trying to solve, key questions to be answered, and how the tools developed will fit into the business process to be supported. Design, build, manage, and monitor reports, dashboards and metrics to visually represent results and deliver actionable insights and data driven decisions Analyze/curate large volumes of data using various tools like Tableau prep, SQL, or any other data-modeling tools etc. Develop interactive and easy-to-understand visualizations using best practices to effectively solve business problems by enabling business insights and making recommendations. Recommend definitions for new and updated metrics, and support metric data governance and documentation. Act as a trusted advisor when questions arise regarding BI solutions and metrics Partner with the BI Engineering team to define new or modified data models needed in the data warehouse. Be an expert in troubleshooting and resolving dashboard, data, and security issues reported by business users and fellow team members. Thoroughly QA new and modified data sources/dashboards for accuracy and functionality. 
Develop training collateral and deliver training to end users on new and existing dashboards. Coach and mentor the less experienced members on the team.

About You

Basic Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or any other related field of study, OR equivalent work experience
- 8+ years of work experience in business intelligence or analytics, or as a data architect
- 7+ years of experience with at least one leading Business Intelligence (BI) tool (e.g., Tableau, Power BI, Sigma) for dashboard and report development
- 3+ years of deep, hands-on experience with Snowflake or AWS Redshift, including advanced query optimization, data modeling, and data governance
- Extensive experience in building visualizations in Tableau, SQL, Sigma, and data preparation required
- Proficiency in Python or another scripting language

Other Qualifications
- Solid understanding of relational database concepts and data modeling
- Excellent analytical and problem-solving skills combined with strong business discernment and an ability to communicate analysis in a clear and compelling manner
- Able to work independently and in a team; meticulous, critical thinker, and performance driven
- Proven experience working with business leaders to understand the business needs that can be answered with data
- Able to thrive in a fast-paced, high-energy, and fun work environment and deliver value incrementally and frequently
- Experience with Agile methodology preferred
- Great teammate who excels at building relationships across the organization

Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025 . Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. Job Overview: Does architecting high quality and scalable data pipelines powering business critical applications excite you? How about working with cutting edge technologies alongside some of the brightest and most collaborative individuals in the industry? Join us, in our mission to bring the best learning to every corner of the world! We’re looking for a passionate and talented individual with a keen eye for data to join the Data Engineering team at Coursera! Data Engineering plays a crucial role in building a robust and reliable data infrastructure that enables data-driven decision-making, as well as various data analytics and machine learning initiatives within Coursera. In addition, Data Engineering today owns many external facing data products that drive revenue and boost partner and learner satisfaction. You firmly believe in Coursera's potential to make a significant impact on the world, and align with our core values: Learners first: Champion the needs, potential, and progress of learners everywhere. Play for team Coursera: Excel as an individual and win as a team. Put Coursera’s mission and results before personal goals. Maximize impact: Increase leverage by focusing on things that produce bigger results with less effort. Learn, change, and grow: Move fast, take risks, innovate, and learn quickly. Invite and offer feedback with respect, courage, and candor. Love without limits: Celebrate the diversity and dignity of every one of our employees, learners, customers, and partners. Your Responsibilities Architect scalable data models and construct high quality ETL pipelines that act as the backbone of our core data lake, with cutting edge technologies such as Airflow, DBT, Databricks, Redshift, Spark. Your work will lay the foundation for our data-driven culture. 
Design, build, and launch self-serve analytics products. Your creations will empower our internal and external customers, providing them with rich insights to make informed decisions. Be a technical leader for the team. Your guidance in technical and architectural designs for major team initiatives will inspire others. Help shape the future of Data Engineering at Coursera and foster a culture of continuous learning and growth. Partner with data scientists, business stakeholders, and product engineers to define, curate, and govern high-fidelity data. Develop new tools and frameworks in collaboration with other engineers. Your innovative solutions will enable our customers to understand and access data more efficiently, while adhering to high standards of governance and compliance. Work cross-functionally with product managers, engineers, and business teams to enable major product and feature launches.

Your Skills
- 5+ years of experience in data engineering, with expertise in data architecture and pipelines
- Strong programming skills in Python
- Proficient with relational databases, data modeling, and SQL
- Experience with big data technologies (e.g., Hive, Spark, Presto)
- Familiarity with batch and streaming architectures preferred
- Hands-on experience with some of: AWS, Databricks, Delta Lake, Airflow, DBT, Redshift, Datahub, Elementary
- Knowledgeable about data governance and compliance best practices
- Ability to communicate technical concepts clearly and concisely
- Independence and passion for innovation and learning new technologies

If this opportunity interests you, you might like these courses on Coursera: Big Data Specialization, Data Warehousing for Business Intelligence, IBM Data Engineering Professional Certificate.

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
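One concrete step behind the ETL pipeline work described above is bulk-loading curated extracts into Redshift; the sketch below shows that step in Python, with the cluster endpoint, table, S3 path, and IAM role all as placeholder assumptions rather than details from the posting.

```python
# Minimal sketch of a bulk load into Redshift with COPY. All connection
# details, the table, the S3 path, and the IAM role are placeholders.
import psycopg2  # pip install psycopg2-binary

COPY_SQL = """
    COPY analytics.course_enrollments
    FROM 's3://example-bucket/curated/enrollments/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)
try:
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)   # Redshift ingests the Parquet files in parallel
finally:
    conn.close()
```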

Posted 1 day ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and MATLAB
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

The ShipTech BI team is looking for a smart and ambitious individual to support developing the operational reporting structure in Amazon Logistics. The potential candidate will support analysis, improvement, and creation of metrics and dashboards on Transportation by Amazon. In addition, they will work with internal customers at all levels of the organization – Operations, Customer Service, HR, Technology, Operational Research. The potential candidate will enjoy the challenges and rewards of working in a fast-growing organization. This is a high-visibility position.

As an Amazon Data Business Intelligence Engineer you will be working in one of the world's largest and most complex data warehouse environments. You should have deep expertise in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in understanding the business requirements and implementing reporting solutions. Above all, you should be passionate about bringing large datasets together to answer business questions and drive change.

Key Responsibilities:
- Design automated solutions for recurrent reporting (daily/weekly/monthly).
- Design automated processes for in-depth analysis databases.
- Design automated data control processes.
- Collaborate with the software development team to build the designed solutions.
- Learn, publish, analyze, and improve management information dashboards, operational business metrics decks, and key performance indicators.
- Improve tools and processes, scale existing solutions, and create new solutions as required based on stakeholder needs.
- Provide in-depth analysis to management with the support of accounting, finance, transportation, and supply chain teams.
- Participate in annual budgeting and forecasting efforts.
- Perform monthly variance analysis and identify risks & opportunities.

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
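A small example of the SQL-plus-Python scripting this role asks for: pull a recent daily order series from Redshift and flag week-over-week swings for review. The cluster endpoint, schema, and the 20% threshold are illustrative assumptions.

```python
# Minimal sketch of a recurring operational report: daily order counts from
# Redshift with a week-over-week change flag. Names and thresholds are placeholders.
import pandas as pd
import redshift_connector  # pip install redshift_connector

QUERY = """
    SELECT order_day, COUNT(*) AS orders
    FROM shipping.orders
    WHERE order_day >= DATEADD(day, -14, CURRENT_DATE)
    GROUP BY order_day
    ORDER BY order_day;
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="ops", user="bi_engineer", password="...",
)
cur = conn.cursor()
cur.execute(QUERY)
report = pd.DataFrame(cur.fetchall(), columns=[col[0] for col in cur.description])
conn.close()

report["wow_change"] = report["orders"].pct_change(periods=7)  # week-over-week delta
print(report[report["wow_change"].abs() > 0.2])  # surface >20% swings for review
```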

Posted 1 day ago

Apply

4.0 years

4 - 6 Lacs

Hyderābād

On-site

Overview: We have an exciting role to head our creative studio for one of Omnicom's largest advertising agencies. This leadership role requires leading and driving world-class advertising, creative, and studio deliverables, working with global brands and agency leaders. The role is responsible overall for production, practice, and people management.

About Omnicom Global Solutions
Omnicom Global Solutions (OGS) is an agile innovation hub of Omnicom Group, a leading global marketing and corporate communications company. Guided by the principles of Knowledge, Innovation, and Transformation, OGS is designed to deliver scalable, customized, and contextualized solutions that meet the evolving needs of our Practice Areas within Omnicom. OGS India plays a key role for our group companies and global agencies by providing stellar products, solutions, and services in the areas of Creative Services, Technology, Marketing Science (Data & Analytics), Advanced Analytics, Market Research, Business Support Services, Media Services, and Project Management. We currently have 4000+ awesome colleagues in OGS India who are committed to solving our clients' pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together!

Responsibilities: About our Agency – Omnicom Health Shared Services
Omnicom Health Group is the world's largest and most diverse global healthcare network, pioneering solutions that shape a healthier future for all. At OHG, you're not just part of a network, you're part of a movement. Our ambition is to be the case study others aspire to, challenging the status quo and redefining what's possible. With flagship locations globally, we deliver local expertise and groundbreaking healthcare solutions across consulting, strategy, creative, media, and more. Our 29 specialized companies work seamlessly to drive innovation with precision and impact. Know more at: https://omnicomhealthgroup.com/

The OGS-OH partnership empowers some of the world's iconic brands with Knowledge, Innovation, and Transformation. When you join, you become part of a dynamic team that delivers high-impact solutions in the healthcare marketing and communications space. Here's what makes us unique:
- We are a growing community that blends creativity, technology, and data-driven insights to transform healthcare.
- Bringing you the best of both worlds – our team partners with key OH strategists while staying rooted in OGS' culture and values.
- Access to top healthcare and biopharmaceutical brands.
- Helping you own your career – unlock diverse learning and upskilling opportunities, along with personalized talent development programs.
- Empowering you with an inclusive, rewarding, and engaging work environment centred around your well-being.

Qualifications: JD Shared by Agency – Reporting & Insights Specialist (Subject Matter Expert)
Function: Market Science
Level: SME
Experience Required: 4–6 years of experience in marketing analytics, reporting architecture, data pipeline optimization, or performance intelligence strategy

1. Role Summary
As a Specialist (SME) in the Reporting & Insights team within Market Science, you will serve as a domain expert in building robust reporting frameworks, optimizing data flows, and enabling scalable reporting systems across clients and platforms. You will lead reporting innovations, consult on best practices, and ensure governance across measurement and dashboarding processes. Your expertise will directly influence the development of strategic performance reporting for Omnicom Health clients, ensuring insights are timely, trusted, and actionable.

2. Key Responsibilities
- Architect reporting ecosystems using BI tools and advanced analytics workflows.
- Standardize KPIs, data definitions, and visualization best practices across clients.
- Collaborate with data engineering teams to enhance data warehousing/reporting infrastructure.
- Drive adoption of reporting automation, modular dashboards, and scalable templates.
- Ensure compliance with data governance, privacy, and client reporting SLAs.
- Act as the go-to expert for dashboarding tools, marketing KPIs, and campaign analytics.
- Conduct training and peer reviews to improve reporting maturity across teams.

3. Skills & Competencies
Skill / Competency | Proficiency Level | Must-Have / Good-to-Have | Criticality Index
BI Tools Mastery (Power BI, Tableau) | Advanced | Must-Have | High
Data Architecture & ETL | Intermediate | Must-Have | High
Cross-Platform Reporting Logic | Advanced | Must-Have | High
Stakeholder Consulting | Advanced | Must-Have | High
Data Governance & QA | Intermediate | Must-Have | High
Leadership & Influence | Intermediate | Must-Have | Medium
Training & Enablement | Intermediate | Good-to-Have | Medium

4. Day-to-Day Deliverables Will Include
- Designing and reviewing dashboards for performance, scalability, and accuracy
- Standardizing metrics, filters, and visualizations across platforms and markets
- Troubleshooting data discrepancies and establishing QA protocols
- Supporting onboarding of new clients or business units into the reporting framework
- Publishing playbooks and SOPs on reporting automation and delivery standards
- Conducting stakeholder walkthroughs and enablement sessions

5. Key Attributes for Success in This Role
- Strategic thinker with a hands-on approach to reporting and automation
- High attention to detail and process consistency
- Confident in translating business needs into scalable BI solutions
- Adaptable to changing client needs, tools, and data environments
- Collaborative, yet assertive in driving reporting excellence

6. Essential Tools/Platforms & Certifications
Tools: Power BI, Advanced Excel, Redshift, Alteryx (basics)
Certifications: Power BI/Tableau Professional, Data Engineering/ETL certifications – Preferred
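For the QA-protocol and data-discrepancy work listed above, a minimal reconciliation pass in Python might look like the following; the table shapes, join key, metric, and tolerance are assumptions for illustration only.

```python
# Minimal sketch of a QA/reconciliation pass: compare a dashboard extract
# against its source data and flag discrepancies before publishing.
import pandas as pd

def reconcile(source: pd.DataFrame, extract: pd.DataFrame,
              key: str = "campaign_id", metric: str = "spend",
              tolerance: float = 0.005) -> pd.DataFrame:
    merged = source.merge(extract, on=key, suffixes=("_src", "_dash"))
    merged["abs_diff"] = (merged[f"{metric}_src"] - merged[f"{metric}_dash"]).abs()
    merged["pct_diff"] = merged["abs_diff"] / merged[f"{metric}_src"].abs()

    # Keys present in the source but missing from the dashboard extract
    missing = set(source[key]) - set(extract[key])
    if missing:
        print(f"Missing from dashboard extract: {sorted(missing)}")

    # Rows whose metric drifts beyond the agreed tolerance
    issues = merged[merged["pct_diff"] > tolerance]
    return issues[[key, f"{metric}_src", f"{metric}_dash", "pct_diff"]]
```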

Posted 1 day ago

Apply

7.0 years

6 - 9 Lacs

Thiruvananthapuram

On-site

7–9 Years | 2 Openings | Trivandrum

Role description: Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
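The Streams & Tasks ingestion pattern referenced in the posting above is worth seeing in miniature: a stream tracks changes on a raw table and a scheduled task merges them into a curated zone. The sketch below is only illustrative; the account, warehouse, schemas, and column list are placeholder assumptions.

```python
# Minimal sketch of the Snowflake Streams & Tasks pattern: change capture on a
# raw table, merged into a curated table on a schedule. Names are placeholders.
import snowflake.connector  # pip install snowflake-connector-python

DDL = [
    "CREATE OR REPLACE STREAM raw.orders_stream ON TABLE raw.orders",
    """
    CREATE OR REPLACE TASK curated.merge_orders
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('raw.orders_stream')
    AS
      MERGE INTO curated.orders AS tgt
      USING raw.orders_stream AS src
      ON tgt.order_id = src.order_id
      WHEN MATCHED THEN UPDATE SET tgt.status = src.status
      WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (src.order_id, src.status)
    """,
    "ALTER TASK curated.merge_orders RESUME",
]

conn = snowflake.connector.connect(
    account="example-account", user="etl_user", password="...",
    warehouse="etl_wh", database="analytics",
)
cur = conn.cursor()
for statement in DDL:
    cur.execute(statement)
conn.close()
```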

Posted 1 day ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9–12 Years | 1 Opening | Trivandrum

Role description: Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
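The "Migrating Redshift workloads to Snowflake" item in the posting above usually includes a bulk data-movement hop; the following is an illustrative sketch of one such hop (UNLOAD to S3 as Parquet, then COPY into Snowflake from an external stage). The bucket, stage, role, and table names are placeholder assumptions, not the employer's setup.

```python
# Minimal sketch of one Redshift-to-Snowflake data-movement step:
# UNLOAD a table to S3, then COPY it into Snowflake from a stage.
import psycopg2
import snowflake.connector

REDSHIFT_UNLOAD = """
    UNLOAD ('SELECT * FROM sales.orders')
    TO 's3://migration-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
    FORMAT AS PARQUET;
"""

SNOWFLAKE_COPY = """
    COPY INTO curated.orders
    FROM @migration_stage/orders/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
"""

rs_conn = psycopg2.connect(host="redshift-cluster.example.com", port=5439,
                           dbname="prod", user="etl", password="...")
with rs_conn, rs_conn.cursor() as cur:
    cur.execute(REDSHIFT_UNLOAD)      # export to S3 in parallel
rs_conn.close()

sf_conn = snowflake.connector.connect(account="example-account", user="etl",
                                      password="...", database="analytics",
                                      warehouse="load_wh")
sf_conn.cursor().execute(SNOWFLAKE_COPY)  # ingest the staged Parquet files
sf_conn.close()
```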

Posted 1 day ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Noida

On-site

Job Information
Work Experience: 4-5 years
Industry: IT Services
Job Type: Full Time
Location: Noida, India

Job Overview: We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.

Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
- Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
- Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
- Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
- Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
- Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
- Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.

Required Skills & Qualifications:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
- Strong experience with PySpark for large-scale data processing and transformation.
- Expertise in SQL and data modeling for relational and non-relational databases.
- Experience building and optimizing ETL pipelines and data integration workflows.
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
- Knowledge of data governance, security, and compliance best practices.
- Strong programming skills in Python; experience with automation and scripting.
- Ability to work collaboratively in agile environments and manage multiple priorities effectively.
- Excellent problem-solving and communication skills.

Preferred Qualifications: AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).

Good to Have Skills: Understanding of machine learning, deep learning, and Generative AI concepts: regression, classification, predictive modeling, and clustering.

Interview Process: internal assessment, followed by 3 technical rounds.
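The Glue-plus-PySpark pipeline work described above typically follows a small, repeatable shape: read a catalogued raw table, transform it, and write curated Parquet back to S3 for Athena or QuickSight. The sketch below shows that shape; the database, table, columns, and S3 paths are placeholder assumptions.

```python
# Minimal sketch of an AWS Glue ETL job: catalog read -> PySpark transform ->
# partitioned Parquet write to S3. All names and paths are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw orders table registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Keep completed orders and derive a partition-friendly date column.
curated = (
    raw.filter(F.col("order_status") == "COMPLETE")
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Write curated data back to S3 for Athena / QuickSight consumption.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```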

Posted 1 day ago

Apply

2.0 years

3 - 10 Lacs

India

Remote

Job Title: Sr. Data Engineer
Experience: 2+ Years
Location: Indore (on-site)
Industry: IT
Job Type: Full time

Roles and Responsibilities:
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow.
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge:
1. Core Skills:
- Proficient in Python (libraries: Pandas, NumPy) and SQL.
- Knowledge of data modeling techniques, including Entity-Relationship (ER) diagrams, dimensional modeling, and data normalization.
- Familiarity with ETL processes and tools like Azure Data Factory (ADF) and SSIS (SQL Server Integration Services).
2. Cloud Expertise:
- AWS services: Glue, Redshift, Lambda, EKS, RDS, Athena
- Azure services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
- Snowflake
3. Big Data and Workflow Automation:
- Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
- Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements:
- Education: Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
- Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
- Other Skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders; ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh - reliably commute or plan to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025

Posted 1 day ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: L1 Support – Data Engineering (Full Time, WFO)
Location: Noida
Work Mode: Noida Office | 6 days/week | 24x7x365 support (rotational shifts)
Salary Range: INR 2.5 to 3 Lacs per annum
Experience: 2 years
Language: English proficiency mandatory

About the Role
We're looking for an experienced and motivated L1 Support Engineer – Data Engineering to join our growing team. If you have solid exposure to AWS, SQL, and Python scripting, and you're ready to thrive in a 24x7 support environment, this role is for you!

What You'll Do
- Monitor and support AWS services (S3, EC2, CloudWatch, IAM)
- Handle SQL-based issue resolution and data analysis
- Run and maintain Python scripts; shell scripting is a plus
- Support ETL pipelines and data workflows
- Monitor Apache Airflow DAGs and resolve basic issues
- Collaborate with cross-functional and multicultural teams

What We're Looking For
- B.Tech or MCA preferred, but candidates with a Bachelor's degree in any field and the right skillset are welcome to apply
- 2 years of Data Engineering Support or similar experience
- Strong skills in AWS, SQL, Python, and ETL processes
- Familiarity with data warehousing (Amazon Redshift or similar)
- Ability to work rotational shifts in a 6-day, 24x7 environment
- Excellent communication and problem-solving skills
- English fluency is required
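The routine AWS monitoring this support role describes often starts with simple scripted checks; the sketch below shows two such checks with boto3 (did today's files land in S3, and how is CPU on the ETL host), where the bucket, prefix, and instance ID are placeholder assumptions.

```python
# Minimal sketch of routine L1 monitoring checks with boto3.
# Bucket, prefix, and instance ID are placeholders.
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# 1. Did the upstream feed land today?
today_prefix = f"landing/{datetime.now(timezone.utc):%Y/%m/%d}/"
objects = s3.list_objects_v2(Bucket="example-data-bucket", Prefix=today_prefix)
print(f"files landed today: {objects.get('KeyCount', 0)}")

# 2. Spot-check CPU on the ETL host over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```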

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Design and implement ETL workflows using AWS Glue, Python, and PySpark (a minimal Glue job sketch follows this posting).
Develop and optimize queries using Amazon Athena and Redshift.
Build scalable data pipelines to ingest, transform, and load data from various sources.
Ensure data quality, integrity, and security across AWS services.
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications
Hands-on experience with AWS Glue, Athena, and Redshift.
Strong programming skills in Python and PySpark.
Experience with ETL design, implementation, and optimization.
Familiarity with S3, Lambda, CloudWatch, and other AWS services.
Understanding of data warehousing concepts and performance tuning in Redshift.
Experience with schema design, partitioning, and query optimization in Athena.
Proficiency in version control (Git) and agile development practices.
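For illustration, a minimal AWS Glue PySpark job of the shape this posting describes: read a catalog table, apply a simple transformation, and write Parquet to S3. Database, table, and bucket names are hypothetical, and the awsglue modules are only available inside a Glue job run.

```python
# Sketch of a Glue ETL job: catalog table -> basic filter -> Parquet on S3.
# Hypothetical database/table/bucket names; runs as an AWS Glue job.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog database/table, e.g. created by a crawler.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Keep only completed orders; a stand-in for real transformation logic.
completed = orders.filter(lambda row: row["status"] == "COMPLETED")

glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```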

Posted 1 day ago

Apply

10.0 - 12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS presents an excellent opportunity for a Data Architect.

Job Description:
Skills: AWS, Glue, Redshift, PySpark
Location: Pune / Kolkata
Experience: 10 to 12 Years

Strong hands-on experience in Python programming and PySpark.
Experience using AWS services (Redshift, Glue, EMR, S3 & Lambda).
Experience working with Apache Spark and the Hadoop ecosystem.
Experience in writing and optimizing SQL for data manipulation.
Good exposure to scheduling tools; Airflow is preferable.
Must-have: data warehouse experience with AWS Redshift or Hive.
Experience in implementing security measures for data protection.
Expertise in building and testing complex data pipelines for ETL processes (batch and near real time).
Readable documentation of all components being developed.
Knowledge of database technologies for OLTP and OLAP workloads.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Singapore

Remote

📊 We’re Hiring: Data Manager | Based in Singapore / Remote-Friendly
📍 Location: Singapore (On-site / Hybrid / Remote options available)
🕒 Employment Type: Full-time
💼 Seniority: Mid to Senior Level

Are you a data-driven leader with a passion for transforming information into actionable insight? We’re looking for an experienced Data Manager to oversee data operations, ensure quality and compliance, and lead data strategy across teams.

🎯 Key Responsibilities:
Lead and manage the end-to-end lifecycle of data across systems, teams, and departments
Design and maintain scalable data infrastructure, pipelines, and warehousing solutions
Establish and enforce data governance, integrity, and security policies
Work with stakeholders (business, tech, and analytics teams) to ensure data needs are met
Collaborate with analysts, engineers, and product teams to support reporting and insights
Monitor KPIs and data quality metrics to drive continuous improvement
Stay updated on data privacy laws (e.g., GDPR, PDPA) and compliance best practices

✅ Requirements:
4+ years of experience in data management, data operations, or analytics leadership roles
Strong SQL and data modeling skills
Familiarity with data platforms such as Snowflake, BigQuery, Redshift, or similar
Experience with BI tools (e.g., Tableau, Power BI, Looker)
Understanding of data governance, lineage, and metadata management
Excellent communication and leadership skills
Based in Singapore or willing to work Singapore business hours if remote

🌟 Nice to Have:
Experience managing data teams or cross-functional data projects
Familiarity with Python, R, or cloud platforms (AWS/GCP/Azure)
Background in regulated industries (e.g., finance, healthcare, pharma)

🎁 What We Offer:
Competitive salary and performance bonuses
Flexible working arrangements (remote/hybrid)
Strong data culture and senior management support
Career growth into head of data, data strategy, or CTO track
Friendly and collaborative work environment

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Sr. Software Engineer (SDE-2)

DViO is one of the largest independent, highly awarded, tech-first marketing companies, with a team of 175+ people operating across India, the Middle East, and South East Asia. We are looking for a Senior Software Engineer (SDE-2) to join our team. The ideal candidate will have a strong background in software development and experience with both frontend and backend technologies. We are looking for someone who is passionate about solving challenging problems through code and is looking to grow in this field.

Responsibilities
Lead technical design sessions, establish coding standards, and conduct code reviews.
Contribute hands-on to feature development, refactoring, and performance improvements.
Mentor and upskill junior engineers through pair programming, feedback, and structured learning plans.
Maintain and evolve our deployment pipelines on our cloud stack.
Oversee ETL workflows and ensure data freshness, integrity, and observability.
Integrate and optimize AI capabilities within the product.
Collaborate closely with Product and Design to translate requirements into robust technical solutions.
Champion best practices in testing, security, and documentation.

Requirements
Qualifications:
Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
4–6 years of professional software development experience, including ≥1 year mentoring or leading engineers.
Strong computer-science fundamentals in data structures, algorithms, and system design.
Deep understanding of object-oriented/functional design and relational databases.
Proficiency in one or more modern web stacks and comfort navigating both backend and frontend codebases.
Proven ability to balance delivery speed with long-term maintainability; strong written and verbal communication skills.

Must-have skills:
Backend: Proficiency in at least one of Python, Node.js, PHP, Go, or Java; experience with an MVC or equivalent framework.
Frontend: Proficiency in React, Next.js, Vue, or Angular; deep knowledge of HTML5, CSS3, and modern JavaScript/TypeScript.
APIs & Data: Designing and consuming RESTful APIs; working with relational databases (MySQL, PostgreSQL, etc.).
Containers & Cloud: Docker-based development and deployment; basic familiarity with AWS, GCP, or Azure services; CI/CD using GitHub Actions, GitLab CI, or similar.
Quality & DevEx: Unit/integration testing, Git-based workflows, and peer code reviews.

Good-to-Have Skills
Practical experience integrating LLM APIs (OpenAI, Anthropic) into applications, including prompt design and cost/performance considerations.
Hands-on experience with data engineering, ETL pipelines, and warehouse querying; comfort debugging data issues.
UI component libraries (shadcn/ui, Chakra UI, Radix UI) and CSS frameworks (TailwindCSS, Bootstrap).
Data-visualization libraries (D3.js, Chart.js, Recharts).
Caching (Redis, Memcached) and search systems (Elasticsearch, Meilisearch, Typesense).
Data warehouses or lakes (Snowflake, BigQuery, Redshift) and SQL performance tuning.
Bash scripting and strong Linux system knowledge.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Position: Sr Data Operations
Years of Experience: 6-8 Years
Job Location: S.B. Road, Pune; remote for other locations

The Position
We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Deliver and Operate team to build and improve our platforms, deliver flexible and creative solutions to our utility partners and end users, and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer II for the Data Operations team to maintain our data infrastructure, pipelines, and workflows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments.

Responsibilities & Skills
You should:
Monitor and maintain data pipelines and ETL processes to ensure reliability and performance.
Automate routine data operations tasks and optimize workflows for scalability and efficiency.
Troubleshoot and resolve data-related issues, ensuring data quality and integrity (a small data-quality check sketch follows this posting).
Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure.
Implement monitoring, alerting, and logging systems for data pipelines.
Maintain and improve data governance, access controls, and compliance with data policies.
Support deployment and configuration of data tools, services, and platforms.
Participate in on-call rotation and incident response related to data system outages or failures.

Required Skills:
5+ years of experience in data operations, data engineering, or a related role.
Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL).
Proficiency with data pipeline tools (e.g., Apache Airflow).
Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery).
Familiarity with scripting languages such as Python, Bash, or Shell.
Knowledge of version control (e.g., Git) and CI/CD workflows.

Qualifications
Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Experience with data observability tools (e.g., Splunk, DataDog).
Background in DevOps or SRE with a focus on data systems.
Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation).
Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).
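For illustration, a small sketch of a routine data-quality check that a data operations engineer might automate, querying a warehouse over the PostgreSQL protocol (which Redshift also speaks) with psycopg2. The table name, volume threshold, and connection string are hypothetical.

```python
# Sketch: flag a table whose latest load looks stale or suspiciously small.
# Hypothetical table and threshold; connection details come from the environment.
import os
import psycopg2

conn = psycopg2.connect(os.environ["WAREHOUSE_DSN"])  # e.g. "host=... dbname=... user=... password=..."
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT count(*), max(loaded_at)
        FROM analytics.daily_orders
        WHERE loaded_at >= current_date
        """
    )
    row_count, latest_load = cur.fetchone()

if row_count < 1000:  # hypothetical minimum expected volume
    print(f"ALERT: only {row_count} rows loaded today (latest load: {latest_load})")
else:
    print(f"OK: {row_count} rows, latest load at {latest_load}")
conn.close()
```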

Posted 1 day ago

Apply

12.0 years

0 Lacs

India

On-site

We are seeking a highly skilled and experienced AWS Architect with a strong background in Data Engineering and expertise in Generative AI. In this pivotal role, you will be responsible for designing, building, and optimizing scalable, secure, and cost-effective data solutions that leverage the power of AWS services, with a particular focus on integrating and managing Generative AI capabilities.

The ideal candidate will possess a deep understanding of data architecture principles, big data technologies, and the latest advancements in Generative AI, including Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). You will work closely with data scientists, machine learning engineers, and business stakeholders to translate complex requirements into robust and innovative solutions on the AWS platform.

Responsibilities:
• Architect and Design: Lead the design and architecture of end-to-end data platforms and pipelines on AWS, incorporating best practices for scalability, reliability, security, and cost optimization.
• Generative AI Integration: Architect and implement Generative AI solutions using AWS services like Amazon Bedrock, Amazon SageMaker, Amazon Q, and other relevant technologies. This includes designing RAG architectures, prompt engineering strategies, and fine-tuning models with proprietary data (knowledge base); a minimal RAG-style sketch follows this posting.
• Data Engineering Expertise: Design, build, and optimize ETL/ELT processes for large-scale data ingestion, transformation, and storage using AWS services such as AWS Glue, Amazon S3, Amazon Redshift, Amazon Athena, Amazon EKS, and Amazon EMR.
• Data Analytics: Design, build, and optimize analytical solutions for large-scale data ingestion, analytics, and insights using AWS services such as Amazon QuickSight.
• Data Governance and Security: Implement robust data governance, data quality, and security measures, ensuring compliance with relevant regulations and industry best practices for both traditional data and Generative AI applications.
• Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and Generative AI workloads, ensuring efficient resource utilization and optimal response times.
• Technical Leadership: Act as a subject matter expert and provide technical guidance to data engineers, data scientists, and other team members. Mentor and educate on AWS data and Generative AI best practices.
• Collaboration: Work closely with cross-functional teams, including product owners, data scientists, and business analysts, to understand requirements and deliver impactful solutions.
• Innovation and Research: Stay up to date with the latest AWS services, data engineering trends, and advancements in Generative AI, evaluating and recommending new technologies to enhance our capabilities.
• Documentation: Create comprehensive technical documentation, including architectural diagrams, design specifications, and operational procedures.
• Cost Management: Monitor and optimize AWS infrastructure costs related to data and Generative AI workloads.

Required Skills and Qualifications:
• 12+ years of experience in data engineering, data warehousing, or big data architecture.
• 5+ years of experience in an AWS Architect role, specifically with a focus on data.
• Proven experience designing and implementing scalable data solutions on AWS.
• Strong hands-on experience with core AWS data services, including:
o Data Storage: Amazon S3, Amazon Redshift, Amazon DynamoDB, Amazon RDS
o Data Processing: AWS Glue, Amazon EMR, Amazon EKS, AWS Lambda, Informatica
o Data Analytics: Amazon QuickSight, Amazon Athena, Tableau
o Data Streaming: Amazon Kinesis, Amazon MSK
o Data Lake: AWS Lake Formation
• Strong competencies in Generative AI, including:
o Experience with Large Language Models (LLMs) and Foundation Models (FMs).
o Hands-on experience with Amazon Bedrock (including model customization, agents, and orchestration).
o Understanding of and experience with Retrieval Augmented Generation (RAG) architectures and vector databases (e.g., Amazon OpenSearch Service for vector indexing).
o Experience with prompt engineering and optimizing model responses.
o Familiarity with Amazon SageMaker for building, training, and deploying custom ML/Generative AI models.
o Knowledge of Amazon Q for business-specific Generative AI applications.
• Proficiency in programming languages such as Python (essential), SQL, and potentially Scala or Java.
• Experience with MLOps/GenAIOps principles and tools for deploying and managing Generative AI models in production.
• Solid understanding of data modeling, data warehousing concepts, and data lake architectures.
• Experience with CI/CD pipelines and DevOps practices on AWS.
• Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
• Strong problem-solving and analytical abilities.

Preferred Qualifications:
• AWS Certified Solutions Architect – Professional or AWS Certified Data Engineer – Associate/Specialty.
• Experience with other Generative AI frameworks (e.g., LangChain) or open-source LLMs.
• Familiarity with containerization technologies like Docker and Kubernetes (Amazon EKS).
• Experience with data transformation tools like Informatica or Matillion.
• Experience with data visualization tools (e.g., Amazon QuickSight, Tableau, Power BI).
• Knowledge of data governance tools like Amazon DataZone.
• Experience in a highly regulated industry (e.g., Financial Services, Healthcare).
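For illustration, a minimal retrieval-augmented generation (RAG) sketch against the Amazon Bedrock runtime Converse API: passages retrieved from a vector store are stuffed into the prompt before calling the model. The retrieval step is a placeholder for a real vector-store lookup (e.g., OpenSearch k-NN), and the model ID is an example that may differ per account and region.

```python
# Sketch of the RAG pattern on Amazon Bedrock (Converse API).
# retrieve_passages() is a placeholder for a real vector-store lookup;
# the model ID and region are examples.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def retrieve_passages(question: str) -> list[str]:
    # Placeholder: query a vector index built from proprietary documents
    # and return the top-k most relevant chunks.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def answer_with_rag(question: str) -> str:
    context = "\n\n".join(retrieve_passages(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(answer_with_rag("What is our data retention policy?"))
```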

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote

Job description
#hiring #SeniorBackendDeveloper
Min Experience: 10+ Years
Location: Remote

We are seeking a highly experienced Technical Lead with over 10 years of experience, including at least 2 years in a leadership role, to guide and mentor a dynamic engineering team. This role is critical to designing, developing, and optimizing high-performance, scalable, and reliable backend systems. The ideal candidate will have deep expertise in Python (Flask), AWS (Lambda, Redshift, Glue, S3), microservices, and database optimization (SQL, RDBMS). We operate in a high-performance environment, comparable to leading product companies, where uptime, defect reduction, and data clarity are paramount. As a Technical Lead, you will ensure engineering excellence, maintain high quality standards, and drive innovation in software architecture and development.

Key Responsibilities:
· Own backend architecture and lead the development of scalable, efficient web applications and microservices (a minimal Flask service sketch follows this posting).
· Ensure production-grade AWS deployment and maintenance with high availability, cost optimization, and security best practices.
· Design and optimize databases (RDBMS, SQL) for performance, scalability, and reliability.
· Lead API and microservices development, ensuring seamless integration, scalability, and maintainability.
· Implement high-performance solutions, emphasizing low latency, uptime, and data accuracy.
· Mentor and guide developers, fostering a culture of collaboration, disciplined coding, and technical excellence.
· Conduct technical reviews, enforce best coding practices, and ensure adherence to security and compliance standards.
· Drive automation and CI/CD pipelines to enhance deployment efficiency and reduce operational overhead.
· Communicate technical concepts effectively to technical and non-technical stakeholders.
· Provide accurate work estimations and align development efforts with broader business objectives.

Key Skills:
Programming: Strong expertise in Python (Flask) and Celery.
AWS: Core experience with Lambda, Redshift, Glue, S3, and production-level deployment strategies.
Microservices & API Development: Deep understanding of architecture, service discovery, API gateway design, observability, and distributed systems best practices.
Database Optimization: Expertise in SQL, PostgreSQL, Amazon Aurora RDS, and performance tuning.
CI/CD & Infrastructure: Experience with GitHub Actions, GitLab CI/CD, Docker, Kubernetes, Terraform, and CloudFormation.
Monitoring & Logging: Familiarity with AWS CloudWatch, ELK Stack, and Prometheus.
Security & Compliance: Knowledge of backend security best practices and performance optimization.
Collaboration & Communication: Ability to articulate complex technical concepts to international stakeholders and work seamlessly in Agile/Scrum environments.

📩 Apply now or refer someone great. Please share your updated resume to hr.team@kpitechservices.com
#PythonJob #jobs #BackendDeveloper
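For illustration, a minimal sketch of the Flask-plus-Celery pattern this posting centers on: a small API endpoint that enqueues slow work to a background worker instead of blocking the request. Route, task, and broker settings are hypothetical.

```python
# Sketch: tiny Flask API that hands a slow job to Celery.
# Hypothetical names throughout; assumes a Redis broker at the given URL.
from celery import Celery
from flask import Flask, jsonify

app = Flask(__name__)
celery_app = Celery("reports", broker="redis://localhost:6379/0")

@celery_app.task
def build_report(customer_id: int) -> None:
    # Placeholder for expensive work (queries, aggregation, file generation).
    print(f"building report for customer {customer_id}")

@app.route("/reports/<int:customer_id>", methods=["POST"])
def request_report(customer_id: int):
    # Return immediately; the worker picks the task up asynchronously.
    task = build_report.delay(customer_id)
    return jsonify({"task_id": task.id, "status": "queued"}), 202

if __name__ == "__main__":
    app.run(port=8000)
```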

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What Gramener offers you
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, a career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Cloud Lead – Analytics & Data Products
We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Roles and Responsibilities
Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs (a Python-based IaC sketch follows this posting).
Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Skills and Qualifications:
10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
Hands-on expertise with the AWS ecosystem and tools listed above.
Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
Familiarity with data engineering and GenAI workflows is a plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
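For illustration, a short infrastructure-as-code sketch. The posting names Terraform and CloudFormation; this adjacent example uses AWS CDK for Python (which synthesizes CloudFormation) with hypothetical stack and bucket names, and would be deployed with the CDK toolkit rather than run standalone.

```python
# Sketch: define a versioned S3 bucket for analytics data with AWS CDK (Python).
# Hypothetical stack/bucket names; CDK synthesizes this into a CloudFormation template.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class AnalyticsDataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned bucket for curated analytics data; retained if the stack is deleted.
        s3.Bucket(
            self,
            "CuratedDataBucket",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
AnalyticsDataStack(app, "AnalyticsDataStack")
app.synth()
```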

Posted 1 day ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

YOE: 10 Years to 15 Years
Skills required: Java, Python, HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, HLD, LLD, SQL, NoSQL, MongoDB, etc.
Preference: Tier 1 colleges/universities

Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate
Candidates from Tier 1 colleges preferred.
MUST have experience in product startups, and should have implemented data engineering systems from an early stage in the company.
MUST have 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
MUST have expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
MUST have proficiency in SQL, Python, and Scala for data processing and analytics.
Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
MUST have a strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
MUST have experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks (a minimal Kafka consumer sketch follows this posting).
Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
Proven ability to drive technical strategy and align it with business objectives.
Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications:
Experience in machine learning infrastructure or MLOps is a plus.
Exposure to real-time data processing and analytics.
Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
Prior experience in a SaaS or high-growth tech company.
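For illustration, a minimal Kafka consumer sketch of the kind of streaming ingestion the posting lists. Topic, consumer group, and broker address are hypothetical, and it uses the kafka-python client.

```python
# Sketch: consume JSON events from a Kafka topic and print a running count.
# Hypothetical topic/group/broker; uses the kafka-python client library.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="data-platform-ingest",      # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

processed = 0
for message in consumer:
    event = message.value
    processed += 1
    # Stand-in for real processing (validation, enrichment, loading downstream).
    print(f"{processed}: order {event.get('order_id')} with status {event.get('status')}")
```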

Posted 1 day ago

Apply

6.0 - 8.0 years

0 Lacs

Greater Kolkata Area

On-site

At Cision, we believe in empowering every individual to make an impact. Here, your voice is heard, your ideas are valued, and your unique perspective fuels our collective success. As part of our global team, you'll thrive in an environment that champions curiosity, collaboration, and innovation, all while making meaningful contributions to the brands we accelerate. Join us in shaping the future of communication and building authentic connections that matter. Whether you're solving complex problems or driving bold innovations, your growth is our success, and together, we’ll create the conversations of tomorrow. Empower your impact at Cision. Be seen, be understood, be you.

Job Summary
The Manager, Business Intelligence position participates in the development of a data strategy to quickly cultivate a data-driven culture across the organization and to optimize our business performance by identifying growth opportunities and highlighting areas for improvement. The role will proactively communicate with stakeholders, team members, and partners to support a high-performing team responsible for providing sales intelligence and data visualizations by leveraging business intelligence tools.

Essential Duties And Responsibilities
Coordinate and align priorities with the organization's strategic goals, partnering with business leadership to identify data and analytical needs via Value Approval
Deliver the business intelligence strategy that combines data visualization to make profitable, data-driven decisions
As a backbone to all things BI, establish and maintain high data integrity, quality, and governance standards; ensure data management practices are in place to support accurate and reliable data analysis
Develop dashboards that provide up-to-date information to sales leaders and sales associates on KPIs and other business objectives and goals
Translate intricate datasets into intuitive and insightful visualizations that drive data-based decision-making across the organization
Distill insights from data and communicate recommendations to business customers
Oversee the selection, implementation, and management of BI tools and technologies
Lead the creation and maintenance of reports, dashboards, and other data visualizations
Translate raw data into visual contexts that are easy for business customers to interpret
Oversee BI projects/enhancements from inception to completion, ensuring they are delivered on time and within budget
Present data insights to stakeholders and business leaders clearly and in a relatable way
Influence key decisions that affect business outcomes
Maintain an accurate data portfolio that includes high-quality dashboards and data models
Mentor and upskill team members, including data analysts, system admins, and BI developers
Participate in the exploration and evaluation of emerging reporting tools, technologies, and methodologies to drive innovation and leverage best practices to advance the organization's BI capabilities

Minimum Required Qualifications
Bachelor’s degree in Computer Science, Information Systems, Business Administration, or a related field; a master’s degree or an MBA can be advantageous
6-8 years of experience in data management or visualization
6-8 years of experience in a high-functioning, fast-paced work environment with strong business acumen
3-5 years as a people leader with high social intelligence
Manage and mentor junior data analysts and BI developers
Proven expertise in executing data management, reporting, and visualization in Domo; secondarily Power BI and Tableau
Experience with Amazon Redshift and dbt desired
Proficient in Microsoft Office Suite
Knowledge of complex data integration from multiple data sources
Experience with statistics and probability
Excellent verbal and written communication skills to translate complex data into easy-to-understand, practical terms that anyone can understand
Deep understanding of data governance, compliance, and privacy best practices
Agility with changing priorities and situations
High attention to detail and accuracy

To be successful in this role, candidates must have demonstrated experience in organizing data in a way that allows business leaders to make informed decisions and reach their full potential by leveraging timely and accurate data.

As a global leader in PR, marketing, and social media management technology and intelligence, Cision helps brands and organizations to identify, connect, and engage with customers and stakeholders to drive business results. PR Newswire, a network of over 1.1 billion influencers, in-depth monitoring, analytics, and its Brandwatch and Falcon.io social media platforms headline a premier suite of solutions. Cision has offices in 24 countries throughout the Americas, EMEA, and APAC. For more information about Cision's award-winning solutions, including its next-gen Cision Communications Cloud®, visit www.cision.com and follow @Cision on Twitter.

Cision is committed to fostering an inclusive environment where all employees can be their authentic selves and perform at their best. We believe diversity, equity, and inclusion are vital to driving our culture, sparking innovation, and achieving long-term success. Cision is proud to have joined more than 600 companies in signing the CEO Action for Diversity & Inclusion™ pledge and was named a “Top Diversity Employer” for 2021 by DiversityJobs.com.

Cision is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status, or other protected statuses. Cision is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Cision will take steps to ensure that people with disabilities are provided reasonable accommodations. Accordingly, if a reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please contact hr.support@cision.com.

Please review our Global Candidate Data Privacy Statement to learn about Cision’s commitment to protecting personal data collected during the hiring process.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

MicroStrategy Senior Developer
Experience: 7-10 years

Description
7-10 years of experience in designing and building BI solutions. Must have expertise in MicroStrategy Desktop/Web/Server, strong SQL and data modeling skills, and a working knowledge of AWS Redshift functions. Experience in dashboard/report development, data integration, and performance tuning is essential.

Key Skills:
MicroStrategy (Desktop, Web, Intelligence Server, Mobile)
SQL, Data Modeling (Dimensional), Data Integration
Report & Dashboard Development, Performance Optimization
AWS Redshift (functions, integration)
Strong analytical and communication skills

Preferred:
Experience with Power BI and the ability to switch between tools
MicroStrategy certifications

Posted 1 day ago

Apply